Jan 22 07:49:11 np0005592159 kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 22 07:49:11 np0005592159 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 22 07:49:11 np0005592159 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592159 kernel: BIOS-provided physical RAM map:
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 22 07:49:11 np0005592159 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 22 07:49:11 np0005592159 kernel: NX (Execute Disable) protection: active
Jan 22 07:49:11 np0005592159 kernel: APIC: Static calls initialized
Jan 22 07:49:11 np0005592159 kernel: SMBIOS 2.8 present.
Jan 22 07:49:11 np0005592159 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 22 07:49:11 np0005592159 kernel: Hypervisor detected: KVM
Jan 22 07:49:11 np0005592159 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 22 07:49:11 np0005592159 kernel: kvm-clock: using sched offset of 3328585702 cycles
Jan 22 07:49:11 np0005592159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 22 07:49:11 np0005592159 kernel: tsc: Detected 2800.000 MHz processor
Jan 22 07:49:11 np0005592159 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 22 07:49:11 np0005592159 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 22 07:49:11 np0005592159 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 22 07:49:11 np0005592159 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 22 07:49:11 np0005592159 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 22 07:49:11 np0005592159 kernel: Using GB pages for direct mapping
Jan 22 07:49:11 np0005592159 kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 22 07:49:11 np0005592159 kernel: ACPI: Early table checksum verification disabled
Jan 22 07:49:11 np0005592159 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 22 07:49:11 np0005592159 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592159 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592159 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592159 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 22 07:49:11 np0005592159 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592159 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 22 07:49:11 np0005592159 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 22 07:49:11 np0005592159 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 22 07:49:11 np0005592159 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 22 07:49:11 np0005592159 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 22 07:49:11 np0005592159 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 22 07:49:11 np0005592159 kernel: No NUMA configuration found
Jan 22 07:49:11 np0005592159 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592159 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 22 07:49:11 np0005592159 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 22 07:49:11 np0005592159 kernel: Zone ranges:
Jan 22 07:49:11 np0005592159 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 22 07:49:11 np0005592159 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 22 07:49:11 np0005592159 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592159 kernel:  Device   empty
Jan 22 07:49:11 np0005592159 kernel: Movable zone start for each node
Jan 22 07:49:11 np0005592159 kernel: Early memory node ranges
Jan 22 07:49:11 np0005592159 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 22 07:49:11 np0005592159 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 22 07:49:11 np0005592159 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 22 07:49:11 np0005592159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 22 07:49:11 np0005592159 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 22 07:49:11 np0005592159 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 22 07:49:11 np0005592159 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 22 07:49:11 np0005592159 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 22 07:49:11 np0005592159 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 22 07:49:11 np0005592159 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 22 07:49:11 np0005592159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 22 07:49:11 np0005592159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 22 07:49:11 np0005592159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 22 07:49:11 np0005592159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 22 07:49:11 np0005592159 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 22 07:49:11 np0005592159 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 22 07:49:11 np0005592159 kernel: TSC deadline timer available
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Max. logical packages:   8
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Max. logical dies:       8
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Max. dies per package:   1
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Max. threads per core:   1
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Num. cores per package:     1
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Num. threads per package:   1
Jan 22 07:49:11 np0005592159 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 22 07:49:11 np0005592159 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 22 07:49:11 np0005592159 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 22 07:49:11 np0005592159 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 22 07:49:11 np0005592159 kernel: Booting paravirtualized kernel on KVM
Jan 22 07:49:11 np0005592159 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 22 07:49:11 np0005592159 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 22 07:49:11 np0005592159 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 22 07:49:11 np0005592159 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 22 07:49:11 np0005592159 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592159 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 22 07:49:11 np0005592159 kernel: random: crng init done
Jan 22 07:49:11 np0005592159 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: Fallback order for Node 0: 0 
Jan 22 07:49:11 np0005592159 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 22 07:49:11 np0005592159 kernel: Policy zone: Normal
Jan 22 07:49:11 np0005592159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 22 07:49:11 np0005592159 kernel: software IO TLB: area num 8.
Jan 22 07:49:11 np0005592159 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 22 07:49:11 np0005592159 kernel: ftrace: allocating 49417 entries in 194 pages
Jan 22 07:49:11 np0005592159 kernel: ftrace: allocated 194 pages with 3 groups
Jan 22 07:49:11 np0005592159 kernel: Dynamic Preempt: voluntary
Jan 22 07:49:11 np0005592159 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 22 07:49:11 np0005592159 kernel: rcu: 	RCU event tracing is enabled.
Jan 22 07:49:11 np0005592159 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 22 07:49:11 np0005592159 kernel: 	Trampoline variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592159 kernel: 	Rude variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592159 kernel: 	Tracing variant of Tasks RCU enabled.
Jan 22 07:49:11 np0005592159 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 22 07:49:11 np0005592159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 22 07:49:11 np0005592159 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592159 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592159 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 22 07:49:11 np0005592159 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 22 07:49:11 np0005592159 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 22 07:49:11 np0005592159 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 22 07:49:11 np0005592159 kernel: Console: colour VGA+ 80x25
Jan 22 07:49:11 np0005592159 kernel: printk: console [ttyS0] enabled
Jan 22 07:49:11 np0005592159 kernel: ACPI: Core revision 20230331
Jan 22 07:49:11 np0005592159 kernel: APIC: Switch to symmetric I/O mode setup
Jan 22 07:49:11 np0005592159 kernel: x2apic enabled
Jan 22 07:49:11 np0005592159 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 22 07:49:11 np0005592159 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 22 07:49:11 np0005592159 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 22 07:49:11 np0005592159 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 22 07:49:11 np0005592159 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 22 07:49:11 np0005592159 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 22 07:49:11 np0005592159 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 22 07:49:11 np0005592159 kernel: Spectre V2 : Mitigation: Retpolines
Jan 22 07:49:11 np0005592159 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 22 07:49:11 np0005592159 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 22 07:49:11 np0005592159 kernel: RETBleed: Mitigation: untrained return thunk
Jan 22 07:49:11 np0005592159 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 22 07:49:11 np0005592159 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 22 07:49:11 np0005592159 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 22 07:49:11 np0005592159 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 22 07:49:11 np0005592159 kernel: x86/bugs: return thunk changed
Jan 22 07:49:11 np0005592159 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 22 07:49:11 np0005592159 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 22 07:49:11 np0005592159 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 22 07:49:11 np0005592159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 22 07:49:11 np0005592159 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 22 07:49:11 np0005592159 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 22 07:49:11 np0005592159 kernel: Freeing SMP alternatives memory: 40K
Jan 22 07:49:11 np0005592159 kernel: pid_max: default: 32768 minimum: 301
Jan 22 07:49:11 np0005592159 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 22 07:49:11 np0005592159 kernel: landlock: Up and running.
Jan 22 07:49:11 np0005592159 kernel: Yama: becoming mindful.
Jan 22 07:49:11 np0005592159 kernel: SELinux:  Initializing.
Jan 22 07:49:11 np0005592159 kernel: LSM support for eBPF active
Jan 22 07:49:11 np0005592159 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 22 07:49:11 np0005592159 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 22 07:49:11 np0005592159 kernel: ... version:                0
Jan 22 07:49:11 np0005592159 kernel: ... bit width:              48
Jan 22 07:49:11 np0005592159 kernel: ... generic registers:      6
Jan 22 07:49:11 np0005592159 kernel: ... value mask:             0000ffffffffffff
Jan 22 07:49:11 np0005592159 kernel: ... max period:             00007fffffffffff
Jan 22 07:49:11 np0005592159 kernel: ... fixed-purpose events:   0
Jan 22 07:49:11 np0005592159 kernel: ... event mask:             000000000000003f
Jan 22 07:49:11 np0005592159 kernel: signal: max sigframe size: 1776
Jan 22 07:49:11 np0005592159 kernel: rcu: Hierarchical SRCU implementation.
Jan 22 07:49:11 np0005592159 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 22 07:49:11 np0005592159 kernel: smp: Bringing up secondary CPUs ...
Jan 22 07:49:11 np0005592159 kernel: smpboot: x86: Booting SMP configuration:
Jan 22 07:49:11 np0005592159 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 22 07:49:11 np0005592159 kernel: smp: Brought up 1 node, 8 CPUs
Jan 22 07:49:11 np0005592159 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 22 07:49:11 np0005592159 kernel: node 0 deferred pages initialised in 12ms
Jan 22 07:49:11 np0005592159 kernel: Memory: 7763860K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 22 07:49:11 np0005592159 kernel: devtmpfs: initialized
Jan 22 07:49:11 np0005592159 kernel: x86/mm: Memory block size: 128MB
Jan 22 07:49:11 np0005592159 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 22 07:49:11 np0005592159 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 22 07:49:11 np0005592159 kernel: pinctrl core: initialized pinctrl subsystem
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 22 07:49:11 np0005592159 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 22 07:49:11 np0005592159 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 22 07:49:11 np0005592159 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 22 07:49:11 np0005592159 kernel: audit: initializing netlink subsys (disabled)
Jan 22 07:49:11 np0005592159 kernel: audit: type=2000 audit(1769086149.811:1): state=initialized audit_enabled=0 res=1
Jan 22 07:49:11 np0005592159 kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 22 07:49:11 np0005592159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 22 07:49:11 np0005592159 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 22 07:49:11 np0005592159 kernel: cpuidle: using governor menu
Jan 22 07:49:11 np0005592159 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 22 07:49:11 np0005592159 kernel: PCI: Using configuration type 1 for base access
Jan 22 07:49:11 np0005592159 kernel: PCI: Using configuration type 1 for extended access
Jan 22 07:49:11 np0005592159 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 22 07:49:11 np0005592159 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 22 07:49:11 np0005592159 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 22 07:49:11 np0005592159 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 22 07:49:11 np0005592159 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 22 07:49:11 np0005592159 kernel: Demotion targets for Node 0: null
Jan 22 07:49:11 np0005592159 kernel: cryptd: max_cpu_qlen set to 1000
Jan 22 07:49:11 np0005592159 kernel: ACPI: Added _OSI(Module Device)
Jan 22 07:49:11 np0005592159 kernel: ACPI: Added _OSI(Processor Device)
Jan 22 07:49:11 np0005592159 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 22 07:49:11 np0005592159 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 22 07:49:11 np0005592159 kernel: ACPI: Interpreter enabled
Jan 22 07:49:11 np0005592159 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 22 07:49:11 np0005592159 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 22 07:49:11 np0005592159 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 22 07:49:11 np0005592159 kernel: PCI: Using E820 reservations for host bridge windows
Jan 22 07:49:11 np0005592159 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 22 07:49:11 np0005592159 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [3] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [4] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [5] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [6] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [7] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [8] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [9] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [10] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [11] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [12] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [13] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [14] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [15] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [16] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [17] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [18] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [19] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [20] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [21] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [22] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [23] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [24] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [25] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [26] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [27] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [28] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [29] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [30] registered
Jan 22 07:49:11 np0005592159 kernel: acpiphp: Slot [31] registered
Jan 22 07:49:11 np0005592159 kernel: PCI host bridge to bus 0000:00
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 22 07:49:11 np0005592159 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 22 07:49:11 np0005592159 kernel: iommu: Default domain type: Translated
Jan 22 07:49:11 np0005592159 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 22 07:49:11 np0005592159 kernel: SCSI subsystem initialized
Jan 22 07:49:11 np0005592159 kernel: ACPI: bus type USB registered
Jan 22 07:49:11 np0005592159 kernel: usbcore: registered new interface driver usbfs
Jan 22 07:49:11 np0005592159 kernel: usbcore: registered new interface driver hub
Jan 22 07:49:11 np0005592159 kernel: usbcore: registered new device driver usb
Jan 22 07:49:11 np0005592159 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 22 07:49:11 np0005592159 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 22 07:49:11 np0005592159 kernel: PTP clock support registered
Jan 22 07:49:11 np0005592159 kernel: EDAC MC: Ver: 3.0.0
Jan 22 07:49:11 np0005592159 kernel: NetLabel: Initializing
Jan 22 07:49:11 np0005592159 kernel: NetLabel:  domain hash size = 128
Jan 22 07:49:11 np0005592159 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 22 07:49:11 np0005592159 kernel: NetLabel:  unlabeled traffic allowed by default
Jan 22 07:49:11 np0005592159 kernel: PCI: Using ACPI for IRQ routing
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 22 07:49:11 np0005592159 kernel: vgaarb: loaded
Jan 22 07:49:11 np0005592159 kernel: clocksource: Switched to clocksource kvm-clock
Jan 22 07:49:11 np0005592159 kernel: VFS: Disk quotas dquot_6.6.0
Jan 22 07:49:11 np0005592159 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 22 07:49:11 np0005592159 kernel: pnp: PnP ACPI init
Jan 22 07:49:11 np0005592159 kernel: pnp: PnP ACPI: found 5 devices
Jan 22 07:49:11 np0005592159 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_INET protocol family
Jan 22 07:49:11 np0005592159 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 22 07:49:11 np0005592159 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_XDP protocol family
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 22 07:49:11 np0005592159 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 22 07:49:11 np0005592159 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 22 07:49:11 np0005592159 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 113178 usecs
Jan 22 07:49:11 np0005592159 kernel: PCI: CLS 0 bytes, default 64
Jan 22 07:49:11 np0005592159 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 22 07:49:11 np0005592159 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 22 07:49:11 np0005592159 kernel: ACPI: bus type thunderbolt registered
Jan 22 07:49:11 np0005592159 kernel: Trying to unpack rootfs image as initramfs...
Jan 22 07:49:11 np0005592159 kernel: Initialise system trusted keyrings
Jan 22 07:49:11 np0005592159 kernel: Key type blacklist registered
Jan 22 07:49:11 np0005592159 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 22 07:49:11 np0005592159 kernel: zbud: loaded
Jan 22 07:49:11 np0005592159 kernel: integrity: Platform Keyring initialized
Jan 22 07:49:11 np0005592159 kernel: integrity: Machine keyring initialized
Jan 22 07:49:11 np0005592159 kernel: Freeing initrd memory: 87956K
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_ALG protocol family
Jan 22 07:49:11 np0005592159 kernel: xor: automatically using best checksumming function   avx       
Jan 22 07:49:11 np0005592159 kernel: Key type asymmetric registered
Jan 22 07:49:11 np0005592159 kernel: Asymmetric key parser 'x509' registered
Jan 22 07:49:11 np0005592159 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 22 07:49:11 np0005592159 kernel: io scheduler mq-deadline registered
Jan 22 07:49:11 np0005592159 kernel: io scheduler kyber registered
Jan 22 07:49:11 np0005592159 kernel: io scheduler bfq registered
Jan 22 07:49:11 np0005592159 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 22 07:49:11 np0005592159 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 22 07:49:11 np0005592159 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 22 07:49:11 np0005592159 kernel: ACPI: button: Power Button [PWRF]
Jan 22 07:49:11 np0005592159 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 22 07:49:11 np0005592159 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 22 07:49:11 np0005592159 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 22 07:49:11 np0005592159 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 22 07:49:11 np0005592159 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 22 07:49:11 np0005592159 kernel: Non-volatile memory driver v1.3
Jan 22 07:49:11 np0005592159 kernel: rdac: device handler registered
Jan 22 07:49:11 np0005592159 kernel: hp_sw: device handler registered
Jan 22 07:49:11 np0005592159 kernel: emc: device handler registered
Jan 22 07:49:11 np0005592159 kernel: alua: device handler registered
Jan 22 07:49:11 np0005592159 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 22 07:49:11 np0005592159 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 22 07:49:11 np0005592159 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 22 07:49:11 np0005592159 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 22 07:49:11 np0005592159 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 22 07:49:11 np0005592159 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 22 07:49:11 np0005592159 kernel: usb usb1: Product: UHCI Host Controller
Jan 22 07:49:11 np0005592159 kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 22 07:49:11 np0005592159 kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 22 07:49:11 np0005592159 kernel: hub 1-0:1.0: USB hub found
Jan 22 07:49:11 np0005592159 kernel: hub 1-0:1.0: 2 ports detected
Jan 22 07:49:11 np0005592159 kernel: usbcore: registered new interface driver usbserial_generic
Jan 22 07:49:11 np0005592159 kernel: usbserial: USB Serial support registered for generic
Jan 22 07:49:11 np0005592159 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 22 07:49:11 np0005592159 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 22 07:49:11 np0005592159 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 22 07:49:11 np0005592159 kernel: mousedev: PS/2 mouse device common for all mice
Jan 22 07:49:11 np0005592159 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 22 07:49:11 np0005592159 kernel: rtc_cmos 00:04: registered as rtc0
Jan 22 07:49:11 np0005592159 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 22 07:49:11 np0005592159 kernel: rtc_cmos 00:04: setting system clock to 2026-01-22T12:49:10 UTC (1769086150)
Jan 22 07:49:11 np0005592159 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 22 07:49:11 np0005592159 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 22 07:49:11 np0005592159 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 22 07:49:11 np0005592159 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 22 07:49:11 np0005592159 kernel: usbcore: registered new interface driver usbhid
Jan 22 07:49:11 np0005592159 kernel: usbhid: USB HID core driver
Jan 22 07:49:11 np0005592159 kernel: drop_monitor: Initializing network drop monitor service
Jan 22 07:49:11 np0005592159 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 22 07:49:11 np0005592159 kernel: Initializing XFRM netlink socket
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_INET6 protocol family
Jan 22 07:49:11 np0005592159 kernel: Segment Routing with IPv6
Jan 22 07:49:11 np0005592159 kernel: NET: Registered PF_PACKET protocol family
Jan 22 07:49:11 np0005592159 kernel: mpls_gso: MPLS GSO support
Jan 22 07:49:11 np0005592159 kernel: IPI shorthand broadcast: enabled
Jan 22 07:49:11 np0005592159 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 22 07:49:11 np0005592159 kernel: AES CTR mode by8 optimization enabled
Jan 22 07:49:11 np0005592159 kernel: sched_clock: Marking stable (1305001570, 145978590)->(1582140909, -131160749)
Jan 22 07:49:11 np0005592159 kernel: registered taskstats version 1
Jan 22 07:49:11 np0005592159 kernel: Loading compiled-in X.509 certificates
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 22 07:49:11 np0005592159 kernel: Demotion targets for Node 0: null
Jan 22 07:49:11 np0005592159 kernel: page_owner is disabled
Jan 22 07:49:11 np0005592159 kernel: Key type .fscrypt registered
Jan 22 07:49:11 np0005592159 kernel: Key type fscrypt-provisioning registered
Jan 22 07:49:11 np0005592159 kernel: Key type big_key registered
Jan 22 07:49:11 np0005592159 kernel: Key type encrypted registered
Jan 22 07:49:11 np0005592159 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 22 07:49:11 np0005592159 kernel: Loading compiled-in module X.509 certificates
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 22 07:49:11 np0005592159 kernel: ima: Allocated hash algorithm: sha256
Jan 22 07:49:11 np0005592159 kernel: ima: No architecture policies found
Jan 22 07:49:11 np0005592159 kernel: evm: Initialising EVM extended attributes:
Jan 22 07:49:11 np0005592159 kernel: evm: security.selinux
Jan 22 07:49:11 np0005592159 kernel: evm: security.SMACK64 (disabled)
Jan 22 07:49:11 np0005592159 kernel: evm: security.SMACK64EXEC (disabled)
Jan 22 07:49:11 np0005592159 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 22 07:49:11 np0005592159 kernel: evm: security.SMACK64MMAP (disabled)
Jan 22 07:49:11 np0005592159 kernel: evm: security.apparmor (disabled)
Jan 22 07:49:11 np0005592159 kernel: evm: security.ima
Jan 22 07:49:11 np0005592159 kernel: evm: security.capability
Jan 22 07:49:11 np0005592159 kernel: evm: HMAC attrs: 0x1
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 22 07:49:11 np0005592159 kernel: Running certificate verification RSA selftest
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 22 07:49:11 np0005592159 kernel: Running certificate verification ECDSA selftest
Jan 22 07:49:11 np0005592159 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 22 07:49:11 np0005592159 kernel: clk: Disabling unused clocks
Jan 22 07:49:11 np0005592159 kernel: Freeing unused decrypted memory: 2028K
Jan 22 07:49:11 np0005592159 kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 22 07:49:11 np0005592159 kernel: Write protecting the kernel read-only data: 30720k
Jan 22 07:49:11 np0005592159 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 22 07:49:11 np0005592159 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: Product: QEMU USB Tablet
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: Manufacturer: QEMU
Jan 22 07:49:11 np0005592159 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 22 07:49:11 np0005592159 kernel: Run /init as init process
Jan 22 07:49:11 np0005592159 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 22 07:49:11 np0005592159 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 22 07:49:11 np0005592159 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:49:11 np0005592159 systemd: Detected virtualization kvm.
Jan 22 07:49:11 np0005592159 systemd: Detected architecture x86-64.
Jan 22 07:49:11 np0005592159 systemd: Running in initrd.
Jan 22 07:49:11 np0005592159 systemd: No hostname configured, using default hostname.
Jan 22 07:49:11 np0005592159 systemd: Hostname set to <localhost>.
Jan 22 07:49:11 np0005592159 systemd: Initializing machine ID from VM UUID.
Jan 22 07:49:11 np0005592159 systemd: Queued start job for default target Initrd Default Target.
Jan 22 07:49:11 np0005592159 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:11 np0005592159 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:49:11 np0005592159 systemd: Reached target Initrd /usr File System.
Jan 22 07:49:11 np0005592159 systemd: Reached target Local File Systems.
Jan 22 07:49:11 np0005592159 systemd: Reached target Path Units.
Jan 22 07:49:11 np0005592159 systemd: Reached target Slice Units.
Jan 22 07:49:11 np0005592159 systemd: Reached target Swaps.
Jan 22 07:49:11 np0005592159 systemd: Reached target Timer Units.
Jan 22 07:49:11 np0005592159 systemd: Listening on D-Bus System Message Bus Socket.
Jan 22 07:49:11 np0005592159 systemd: Listening on Journal Socket (/dev/log).
Jan 22 07:49:11 np0005592159 systemd: Listening on Journal Socket.
Jan 22 07:49:11 np0005592159 systemd: Listening on udev Control Socket.
Jan 22 07:49:11 np0005592159 systemd: Listening on udev Kernel Socket.
Jan 22 07:49:11 np0005592159 systemd: Reached target Socket Units.
Jan 22 07:49:11 np0005592159 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:49:11 np0005592159 systemd: Starting Journal Service...
Jan 22 07:49:11 np0005592159 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:49:11 np0005592159 systemd: Starting Apply Kernel Variables...
Jan 22 07:49:11 np0005592159 systemd: Starting Create System Users...
Jan 22 07:49:11 np0005592159 systemd: Starting Setup Virtual Console...
Jan 22 07:49:11 np0005592159 systemd: Finished Create List of Static Device Nodes.
Jan 22 07:49:11 np0005592159 systemd: Finished Apply Kernel Variables.
Jan 22 07:49:11 np0005592159 systemd: Finished Create System Users.
Jan 22 07:49:11 np0005592159 systemd: Starting Create Static Device Nodes in /dev...
Jan 22 07:49:11 np0005592159 systemd-journald[307]: Journal started
Jan 22 07:49:11 np0005592159 systemd-journald[307]: Runtime Journal (/run/log/journal/5492a354d1924c48860299be1884b049) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:11 np0005592159 systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 22 07:49:11 np0005592159 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 22 07:49:11 np0005592159 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 22 07:49:11 np0005592159 systemd: Started Journal Service.
Jan 22 07:49:11 np0005592159 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:49:11 np0005592159 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:49:11 np0005592159 systemd[1]: Finished Setup Virtual Console.
Jan 22 07:49:11 np0005592159 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 22 07:49:11 np0005592159 systemd[1]: Starting dracut cmdline hook...
Jan 22 07:49:11 np0005592159 dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 22 07:49:11 np0005592159 dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 22 07:49:11 np0005592159 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:49:11 np0005592159 systemd[1]: Finished dracut cmdline hook.
Jan 22 07:49:11 np0005592159 systemd[1]: Starting dracut pre-udev hook...
Jan 22 07:49:11 np0005592159 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 22 07:49:11 np0005592159 kernel: device-mapper: uevent: version 1.0.3
Jan 22 07:49:11 np0005592159 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 22 07:49:11 np0005592159 kernel: RPC: Registered named UNIX socket transport module.
Jan 22 07:49:11 np0005592159 kernel: RPC: Registered udp transport module.
Jan 22 07:49:11 np0005592159 kernel: RPC: Registered tcp transport module.
Jan 22 07:49:11 np0005592159 kernel: RPC: Registered tcp-with-tls transport module.
Jan 22 07:49:11 np0005592159 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 22 07:49:12 np0005592159 rpc.statd[445]: Version 2.5.4 starting
Jan 22 07:49:12 np0005592159 rpc.statd[445]: Initializing NSM state
Jan 22 07:49:12 np0005592159 rpc.idmapd[450]: Setting log level to 0
Jan 22 07:49:12 np0005592159 systemd[1]: Finished dracut pre-udev hook.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:49:12 np0005592159 systemd-udevd[463]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:49:12 np0005592159 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting dracut pre-trigger hook...
Jan 22 07:49:12 np0005592159 systemd[1]: Finished dracut pre-trigger hook.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting Coldplug All udev Devices...
Jan 22 07:49:12 np0005592159 systemd[1]: Created slice Slice /system/modprobe.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:49:12 np0005592159 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:49:12 np0005592159 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:12 np0005592159 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:12 np0005592159 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 22 07:49:12 np0005592159 systemd[1]: Mounting Kernel Configuration File System...
Jan 22 07:49:12 np0005592159 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Network.
Jan 22 07:49:12 np0005592159 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 22 07:49:12 np0005592159 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 22 07:49:12 np0005592159 kernel: vda: vda1
Jan 22 07:49:12 np0005592159 systemd[1]: Starting dracut initqueue hook...
Jan 22 07:49:12 np0005592159 systemd[1]: Mounted Kernel Configuration File System.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target System Initialization.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Basic System.
Jan 22 07:49:12 np0005592159 kernel: scsi host0: ata_piix
Jan 22 07:49:12 np0005592159 kernel: scsi host1: ata_piix
Jan 22 07:49:12 np0005592159 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 22 07:49:12 np0005592159 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 22 07:49:12 np0005592159 systemd-udevd[489]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:49:12 np0005592159 systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Initrd Root Device.
Jan 22 07:49:12 np0005592159 kernel: ata1: found unknown device (class 0)
Jan 22 07:49:12 np0005592159 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 22 07:49:12 np0005592159 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 22 07:49:12 np0005592159 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 22 07:49:12 np0005592159 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 22 07:49:12 np0005592159 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 22 07:49:12 np0005592159 systemd[1]: Finished dracut initqueue hook.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Remote Encrypted Volumes.
Jan 22 07:49:12 np0005592159 systemd[1]: Reached target Remote File Systems.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting dracut pre-mount hook...
Jan 22 07:49:12 np0005592159 systemd[1]: Finished dracut pre-mount hook.
Jan 22 07:49:12 np0005592159 systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 22 07:49:12 np0005592159 systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 22 07:49:12 np0005592159 systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 22 07:49:12 np0005592159 systemd[1]: Mounting /sysroot...
Jan 22 07:49:13 np0005592159 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 22 07:49:13 np0005592159 kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 22 07:49:13 np0005592159 kernel: XFS (vda1): Ending clean mount
Jan 22 07:49:13 np0005592159 systemd[1]: Mounted /sysroot.
Jan 22 07:49:13 np0005592159 systemd[1]: Reached target Initrd Root File System.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 22 07:49:13 np0005592159 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 22 07:49:13 np0005592159 systemd[1]: Reached target Initrd File Systems.
Jan 22 07:49:13 np0005592159 systemd[1]: Reached target Initrd Default Target.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting dracut mount hook...
Jan 22 07:49:13 np0005592159 systemd[1]: Finished dracut mount hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 22 07:49:13 np0005592159 rpc.idmapd[450]: exiting on signal 15
Jan 22 07:49:13 np0005592159 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Network.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Timer Units.
Jan 22 07:49:13 np0005592159 systemd[1]: dbus.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Initrd Default Target.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Basic System.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Initrd Root Device.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Initrd /usr File System.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Path Units.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Remote File Systems.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Slice Units.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Socket Units.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target System Initialization.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Local File Systems.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Swaps.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut mount hook.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut pre-mount hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped target Local Encrypted Volumes.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut initqueue hook.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Create Volatile Files and Directories.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Coldplug All udev Devices.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut pre-trigger hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Setup Virtual Console.
Jan 22 07:49:13 np0005592159 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-udevd.service: Consumed 1.053s CPU time.
Jan 22 07:49:13 np0005592159 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Closed udev Control Socket.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Closed udev Kernel Socket.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut pre-udev hook.
Jan 22 07:49:13 np0005592159 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped dracut cmdline hook.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting Cleanup udev Database...
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 22 07:49:13 np0005592159 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Create List of Static Device Nodes.
Jan 22 07:49:13 np0005592159 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Stopped Create System Users.
Jan 22 07:49:13 np0005592159 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 22 07:49:13 np0005592159 systemd[1]: Finished Cleanup udev Database.
Jan 22 07:49:13 np0005592159 systemd[1]: Reached target Switch Root.
Jan 22 07:49:13 np0005592159 systemd[1]: Starting Switch Root...
Jan 22 07:49:13 np0005592159 systemd[1]: Switching root.
Jan 22 07:49:13 np0005592159 systemd-journald[307]: Journal stopped
Jan 22 07:49:14 np0005592159 systemd-journald: Received SIGTERM from PID 1 (systemd).
Jan 22 07:49:14 np0005592159 kernel: audit: type=1404 audit(1769086153.822:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 07:49:14 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 07:49:14 np0005592159 kernel: audit: type=1403 audit(1769086153.961:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 22 07:49:14 np0005592159 systemd: Successfully loaded SELinux policy in 143.081ms.
Jan 22 07:49:14 np0005592159 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26ms.
Jan 22 07:49:14 np0005592159 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 22 07:49:14 np0005592159 systemd: Detected virtualization kvm.
Jan 22 07:49:14 np0005592159 systemd: Detected architecture x86-64.
Jan 22 07:49:14 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 07:49:14 np0005592159 systemd: initrd-switch-root.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd: Stopped Switch Root.
Jan 22 07:49:14 np0005592159 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 22 07:49:14 np0005592159 systemd: Created slice Slice /system/getty.
Jan 22 07:49:14 np0005592159 systemd: Created slice Slice /system/serial-getty.
Jan 22 07:49:14 np0005592159 systemd: Created slice Slice /system/sshd-keygen.
Jan 22 07:49:14 np0005592159 systemd: Created slice User and Session Slice.
Jan 22 07:49:14 np0005592159 systemd: Started Dispatch Password Requests to Console Directory Watch.
Jan 22 07:49:14 np0005592159 systemd: Started Forward Password Requests to Wall Directory Watch.
Jan 22 07:49:14 np0005592159 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 22 07:49:14 np0005592159 systemd: Reached target Local Encrypted Volumes.
Jan 22 07:49:14 np0005592159 systemd: Stopped target Switch Root.
Jan 22 07:49:14 np0005592159 systemd: Stopped target Initrd File Systems.
Jan 22 07:49:14 np0005592159 systemd: Stopped target Initrd Root File System.
Jan 22 07:49:14 np0005592159 systemd: Reached target Local Integrity Protected Volumes.
Jan 22 07:49:14 np0005592159 systemd: Reached target Path Units.
Jan 22 07:49:14 np0005592159 systemd: Reached target rpc_pipefs.target.
Jan 22 07:49:14 np0005592159 systemd: Reached target Slice Units.
Jan 22 07:49:14 np0005592159 systemd: Reached target Swaps.
Jan 22 07:49:14 np0005592159 systemd: Reached target Local Verity Protected Volumes.
Jan 22 07:49:14 np0005592159 systemd: Listening on RPCbind Server Activation Socket.
Jan 22 07:49:14 np0005592159 systemd: Reached target RPC Port Mapper.
Jan 22 07:49:14 np0005592159 systemd: Listening on Process Core Dump Socket.
Jan 22 07:49:14 np0005592159 systemd: Listening on initctl Compatibility Named Pipe.
Jan 22 07:49:14 np0005592159 systemd: Listening on udev Control Socket.
Jan 22 07:49:14 np0005592159 systemd: Listening on udev Kernel Socket.
Jan 22 07:49:14 np0005592159 systemd: Mounting Huge Pages File System...
Jan 22 07:49:14 np0005592159 systemd: Mounting POSIX Message Queue File System...
Jan 22 07:49:14 np0005592159 systemd: Mounting Kernel Debug File System...
Jan 22 07:49:14 np0005592159 systemd: Mounting Kernel Trace File System...
Jan 22 07:49:14 np0005592159 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:49:14 np0005592159 systemd: Starting Create List of Static Device Nodes...
Jan 22 07:49:14 np0005592159 systemd: Starting Load Kernel Module configfs...
Jan 22 07:49:14 np0005592159 systemd: Starting Load Kernel Module drm...
Jan 22 07:49:14 np0005592159 systemd: Starting Load Kernel Module efi_pstore...
Jan 22 07:49:14 np0005592159 systemd: Starting Load Kernel Module fuse...
Jan 22 07:49:14 np0005592159 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 22 07:49:14 np0005592159 systemd: systemd-fsck-root.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd: Stopped File System Check on Root Device.
Jan 22 07:49:14 np0005592159 systemd: Stopped Journal Service.
Jan 22 07:49:14 np0005592159 kernel: fuse: init (API version 7.37)
Jan 22 07:49:14 np0005592159 systemd: Starting Journal Service...
Jan 22 07:49:14 np0005592159 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592159 systemd: Starting Generate network units from Kernel command line...
Jan 22 07:49:14 np0005592159 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:14 np0005592159 systemd: Starting Remount Root and Kernel File Systems...
Jan 22 07:49:14 np0005592159 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592159 systemd: Starting Apply Kernel Variables...
Jan 22 07:49:14 np0005592159 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 22 07:49:14 np0005592159 systemd: Starting Coldplug All udev Devices...
Jan 22 07:49:14 np0005592159 systemd: Mounted Huge Pages File System.
Jan 22 07:49:14 np0005592159 systemd: Mounted POSIX Message Queue File System.
Jan 22 07:49:14 np0005592159 systemd: Mounted Kernel Debug File System.
Jan 22 07:49:14 np0005592159 systemd-journald[675]: Journal started
Jan 22 07:49:14 np0005592159 systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:14 np0005592159 systemd: Mounted Kernel Trace File System.
Jan 22 07:49:14 np0005592159 systemd[1]: Queued start job for default target Multi-User System.
Jan 22 07:49:14 np0005592159 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd: Started Journal Service.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Create List of Static Device Nodes.
Jan 22 07:49:14 np0005592159 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:14 np0005592159 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 22 07:49:14 np0005592159 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Load Kernel Module fuse.
Jan 22 07:49:14 np0005592159 kernel: ACPI: bus type drm_connector registered
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 22 07:49:14 np0005592159 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Load Kernel Module drm.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Generate network units from Kernel command line.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Apply Kernel Variables.
Jan 22 07:49:14 np0005592159 systemd[1]: Mounting FUSE Control File System...
Jan 22 07:49:14 np0005592159 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Rebuild Hardware Database...
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 22 07:49:14 np0005592159 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Load/Save OS Random Seed...
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Create System Users...
Jan 22 07:49:14 np0005592159 systemd-journald[675]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 22 07:49:14 np0005592159 systemd-journald[675]: Received client request to flush runtime journal.
Jan 22 07:49:14 np0005592159 systemd[1]: Mounted FUSE Control File System.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Load/Save OS Random Seed.
Jan 22 07:49:14 np0005592159 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Coldplug All udev Devices.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Create System Users.
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 22 07:49:14 np0005592159 systemd[1]: Reached target Preparation for Local File Systems.
Jan 22 07:49:14 np0005592159 systemd[1]: Reached target Local File Systems.
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 22 07:49:14 np0005592159 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 22 07:49:14 np0005592159 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592159 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Automatic Boot Loader Update...
Jan 22 07:49:14 np0005592159 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Create Volatile Files and Directories...
Jan 22 07:49:14 np0005592159 bootctl[693]: Couldn't find EFI system partition, skipping.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Automatic Boot Loader Update.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Create Volatile Files and Directories.
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Security Auditing Service...
Jan 22 07:49:14 np0005592159 systemd[1]: Starting RPC Bind...
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Rebuild Journal Catalog...
Jan 22 07:49:14 np0005592159 auditd[699]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 22 07:49:14 np0005592159 auditd[699]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Rebuild Journal Catalog.
Jan 22 07:49:14 np0005592159 systemd[1]: Started RPC Bind.
Jan 22 07:49:14 np0005592159 augenrules[704]: /sbin/augenrules: No change
Jan 22 07:49:14 np0005592159 augenrules[719]: No rules
Jan 22 07:49:14 np0005592159 augenrules[719]: enabled 1
Jan 22 07:49:14 np0005592159 augenrules[719]: failure 1
Jan 22 07:49:14 np0005592159 augenrules[719]: pid 699
Jan 22 07:49:14 np0005592159 augenrules[719]: rate_limit 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_limit 8192
Jan 22 07:49:14 np0005592159 augenrules[719]: lost 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog 3
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592159 augenrules[719]: enabled 1
Jan 22 07:49:14 np0005592159 augenrules[719]: failure 1
Jan 22 07:49:14 np0005592159 augenrules[719]: pid 699
Jan 22 07:49:14 np0005592159 augenrules[719]: rate_limit 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_limit 8192
Jan 22 07:49:14 np0005592159 augenrules[719]: lost 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog 2
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592159 augenrules[719]: enabled 1
Jan 22 07:49:14 np0005592159 augenrules[719]: failure 1
Jan 22 07:49:14 np0005592159 augenrules[719]: pid 699
Jan 22 07:49:14 np0005592159 augenrules[719]: rate_limit 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_limit 8192
Jan 22 07:49:14 np0005592159 augenrules[719]: lost 0
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog 2
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time 60000
Jan 22 07:49:14 np0005592159 augenrules[719]: backlog_wait_time_actual 0
Jan 22 07:49:14 np0005592159 systemd[1]: Started Security Auditing Service.
Jan 22 07:49:14 np0005592159 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 22 07:49:14 np0005592159 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 22 07:49:15 np0005592159 systemd[1]: Finished Rebuild Hardware Database.
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Update is Completed...
Jan 22 07:49:15 np0005592159 systemd[1]: Finished Update is Completed.
Jan 22 07:49:15 np0005592159 systemd-udevd[727]: Using default interface naming scheme 'rhel-9.0'.
Jan 22 07:49:15 np0005592159 systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target System Initialization.
Jan 22 07:49:15 np0005592159 systemd[1]: Started dnf makecache --timer.
Jan 22 07:49:15 np0005592159 systemd[1]: Started Daily rotation of log files.
Jan 22 07:49:15 np0005592159 systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target Timer Units.
Jan 22 07:49:15 np0005592159 systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 22 07:49:15 np0005592159 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target Socket Units.
Jan 22 07:49:15 np0005592159 systemd[1]: Starting D-Bus System Message Bus...
Jan 22 07:49:15 np0005592159 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:15 np0005592159 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Load Kernel Module configfs...
Jan 22 07:49:15 np0005592159 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 22 07:49:15 np0005592159 systemd[1]: Finished Load Kernel Module configfs.
Jan 22 07:49:15 np0005592159 systemd-udevd[741]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:49:15 np0005592159 systemd[1]: Started D-Bus System Message Bus.
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target Basic System.
Jan 22 07:49:15 np0005592159 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 22 07:49:15 np0005592159 dbus-broker-lau[760]: Ready
Jan 22 07:49:15 np0005592159 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 22 07:49:15 np0005592159 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 22 07:49:15 np0005592159 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 22 07:49:15 np0005592159 systemd[1]: Starting NTP client/server...
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 22 07:49:15 np0005592159 systemd[1]: Starting IPv4 firewall with iptables...
Jan 22 07:49:15 np0005592159 systemd[1]: Started irqbalance daemon.
Jan 22 07:49:15 np0005592159 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 22 07:49:15 np0005592159 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592159 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592159 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target sshd-keygen.target.
Jan 22 07:49:15 np0005592159 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 22 07:49:15 np0005592159 systemd[1]: Reached target User and Group Name Lookups.
Jan 22 07:49:15 np0005592159 systemd[1]: Starting User Login Management...
Jan 22 07:49:15 np0005592159 systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 22 07:49:15 np0005592159 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 22 07:49:15 np0005592159 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 22 07:49:15 np0005592159 kernel: Console: switching to colour dummy device 80x25
Jan 22 07:49:15 np0005592159 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 22 07:49:15 np0005592159 kernel: [drm] features: -context_init
Jan 22 07:49:15 np0005592159 kernel: [drm] number of scanouts: 1
Jan 22 07:49:15 np0005592159 kernel: [drm] number of cap sets: 0
Jan 22 07:49:15 np0005592159 chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 07:49:15 np0005592159 chronyd[798]: Loaded 0 symmetric keys
Jan 22 07:49:15 np0005592159 chronyd[798]: Using right/UTC timezone to obtain leap second data
Jan 22 07:49:15 np0005592159 chronyd[798]: Loaded seccomp filter (level 2)
Jan 22 07:49:15 np0005592159 systemd[1]: Started NTP client/server.
Jan 22 07:49:15 np0005592159 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 07:49:15 np0005592159 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 07:49:15 np0005592159 systemd-logind[787]: New seat seat0.
Jan 22 07:49:15 np0005592159 systemd[1]: Started User Login Management.
Jan 22 07:49:15 np0005592159 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 22 07:49:15 np0005592159 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 22 07:49:15 np0005592159 kernel: Console: switching to colour frame buffer device 128x48
Jan 22 07:49:15 np0005592159 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 22 07:49:15 np0005592159 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 22 07:49:15 np0005592159 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 22 07:49:15 np0005592159 kernel: kvm_amd: TSC scaling supported
Jan 22 07:49:15 np0005592159 kernel: kvm_amd: Nested Virtualization enabled
Jan 22 07:49:15 np0005592159 kernel: kvm_amd: Nested Paging enabled
Jan 22 07:49:15 np0005592159 kernel: kvm_amd: LBR virtualization supported
Jan 22 07:49:15 np0005592159 iptables.init[780]: iptables: Applying firewall rules: [  OK  ]
Jan 22 07:49:15 np0005592159 systemd[1]: Finished IPv4 firewall with iptables.
Jan 22 07:49:15 np0005592159 cloud-init[836]: Cloud-init v. 24.4-8.el9 running 'init-local' at Thu, 22 Jan 2026 12:49:15 +0000. Up 6.40 seconds.
Jan 22 07:49:15 np0005592159 systemd[1]: run-cloud\x2dinit-tmp-tmpngi0hxr5.mount: Deactivated successfully.
Jan 22 07:49:15 np0005592159 systemd[1]: Starting Hostname Service...
Jan 22 07:49:16 np0005592159 systemd[1]: Started Hostname Service.
Jan 22 07:49:16 np0005592159 systemd-hostnamed[850]: Hostname set to <np0005592159.novalocal> (static)
Jan 22 07:49:16 np0005592159 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 22 07:49:16 np0005592159 systemd[1]: Reached target Preparation for Network.
Jan 22 07:49:16 np0005592159 systemd[1]: Starting Network Manager...
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2719] NetworkManager (version 1.54.3-2.el9) is starting... (boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2726] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2810] manager[0x56014830a000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2861] hostname: hostname: using hostnamed
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2861] hostname: static hostname changed from (none) to "np0005592159.novalocal"
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2866] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2975] manager[0x56014830a000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.2976] manager[0x56014830a000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:49:16 np0005592159 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3031] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3031] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3032] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3032] manager: Networking is enabled by state file
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3034] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3045] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3068] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3080] dhcp: init: Using DHCP client 'internal'
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3083] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3098] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3105] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3115] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3126] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3129] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3175] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3179] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3181] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3184] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3186] device (eth0): carrier: link connected
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3189] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3195] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3201] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3205] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3206] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3210] manager: NetworkManager state is now CONNECTING
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3211] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3219] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3222] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:49:16 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:49:16 np0005592159 systemd[1]: Started Network Manager.
Jan 22 07:49:16 np0005592159 systemd[1]: Reached target Network.
Jan 22 07:49:16 np0005592159 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:49:16 np0005592159 systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 22 07:49:16 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3494] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3497] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.3504] device (lo): Activation: successful, device activated.
Jan 22 07:49:16 np0005592159 systemd[1]: Started GSSAPI Proxy Daemon.
Jan 22 07:49:16 np0005592159 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 22 07:49:16 np0005592159 systemd[1]: Reached target NFS client services.
Jan 22 07:49:16 np0005592159 systemd[1]: Reached target Preparation for Remote File Systems.
Jan 22 07:49:16 np0005592159 systemd[1]: Reached target Remote File Systems.
Jan 22 07:49:16 np0005592159 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5861] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5870] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5891] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5933] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5935] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5938] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5940] device (eth0): Activation: successful, device activated.
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5945] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:49:16 np0005592159 NetworkManager[854]: <info>  [1769086156.5947] manager: startup complete
Jan 22 07:49:16 np0005592159 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:49:16 np0005592159 systemd[1]: Starting Cloud-init: Network Stage...
Jan 22 07:49:16 np0005592159 cloud-init[918]: Cloud-init v. 24.4-8.el9 running 'init' at Thu, 22 Jan 2026 12:49:16 +0000. Up 7.69 seconds.
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |  eth0  | True |         38.102.83.5          | 255.255.255.0 | global | fa:16:3e:9d:96:b7 |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |  eth0  | True | fe80::f816:3eff:fe9d:96b7/64 |       .       |  link  | fa:16:3e:9d:96:b7 |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 22 07:49:16 np0005592159 cloud-init[918]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 22 07:49:17 np0005592159 cloud-init[918]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 22 07:49:18 np0005592159 cloud-init[918]: Generating public/private rsa key pair.
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key fingerprint is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: SHA256:ZrAkF9Xrv+nsA28p+s/bLd5i3L5ajk6r69DLcZjH8XE root@np0005592159.novalocal
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key's randomart image is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: +---[RSA 3072]----+
Jan 22 07:49:18 np0005592159 cloud-init[918]: |      ....       |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |       .  .      |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |    . +    .     |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |     + o  .      |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |      . S.   . .E|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |       o  + + o o|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |         . O B + |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |          +o%oXo.|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |        .oo%&@+*=|
Jan 22 07:49:18 np0005592159 cloud-init[918]: +----[SHA256]-----+
Jan 22 07:49:18 np0005592159 cloud-init[918]: Generating public/private ecdsa key pair.
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key fingerprint is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: SHA256:N07ntx9a6ee1wXNKZCLxHO+EmOEsTiu/ut92zp8Te3Q root@np0005592159.novalocal
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key's randomart image is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: +---[ECDSA 256]---+
Jan 22 07:49:18 np0005592159 cloud-init[918]: |                 |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |                 |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |          o .    |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |         o B +   |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |        S X * =  |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |       o * = *o E|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |      . o . . +@+|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |       o ....o*+X|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |      o++o.ooo=B+|
Jan 22 07:49:18 np0005592159 cloud-init[918]: +----[SHA256]-----+
Jan 22 07:49:18 np0005592159 cloud-init[918]: Generating public/private ed25519 key pair.
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 22 07:49:18 np0005592159 cloud-init[918]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key fingerprint is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: SHA256:1FUgzkEXvMlB1Efuil9gJ1N6+xIx3oHGUMLDz4R4tWE root@np0005592159.novalocal
Jan 22 07:49:18 np0005592159 cloud-init[918]: The key's randomart image is:
Jan 22 07:49:18 np0005592159 cloud-init[918]: +--[ED25519 256]--+
Jan 22 07:49:18 np0005592159 cloud-init[918]: |         .B=@E...|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |         = %=.+..|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |        . =.B=..o|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |       .    +*o= |
Jan 22 07:49:18 np0005592159 cloud-init[918]: |        S   ..*+=|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |             ooBo|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |            . .o.|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |             ....|
Jan 22 07:49:18 np0005592159 cloud-init[918]: |              ...|
Jan 22 07:49:18 np0005592159 cloud-init[918]: +----[SHA256]-----+
Jan 22 07:49:18 np0005592159 systemd[1]: Finished Cloud-init: Network Stage.
Jan 22 07:49:18 np0005592159 systemd[1]: Reached target Cloud-config availability.
Jan 22 07:49:18 np0005592159 systemd[1]: Reached target Network is Online.
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Cloud-init: Config Stage...
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Crash recovery kernel arming...
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Notify NFS peers of a restart...
Jan 22 07:49:18 np0005592159 systemd[1]: Starting System Logging Service...
Jan 22 07:49:18 np0005592159 sm-notify[1001]: Version 2.5.4 starting
Jan 22 07:49:18 np0005592159 systemd[1]: Starting OpenSSH server daemon...
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Permit User Sessions...
Jan 22 07:49:18 np0005592159 systemd[1]: Started Notify NFS peers of a restart.
Jan 22 07:49:18 np0005592159 systemd[1]: Started OpenSSH server daemon.
Jan 22 07:49:18 np0005592159 systemd[1]: Finished Permit User Sessions.
Jan 22 07:49:18 np0005592159 systemd[1]: Started Command Scheduler.
Jan 22 07:49:18 np0005592159 systemd[1]: Started Getty on tty1.
Jan 22 07:49:18 np0005592159 systemd[1]: Started Serial Getty on ttyS0.
Jan 22 07:49:18 np0005592159 systemd[1]: Reached target Login Prompts.
Jan 22 07:49:18 np0005592159 systemd[1]: Started System Logging Service.
Jan 22 07:49:18 np0005592159 rsyslogd[1002]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1002" x-info="https://www.rsyslog.com"] start
Jan 22 07:49:18 np0005592159 rsyslogd[1002]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 22 07:49:18 np0005592159 systemd[1]: Reached target Multi-User System.
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 22 07:49:18 np0005592159 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 22 07:49:18 np0005592159 systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 22 07:49:18 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 07:49:18 np0005592159 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Jan 22 07:49:18 np0005592159 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 22 07:49:18 np0005592159 cloud-init[1129]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Thu, 22 Jan 2026 12:49:18 +0000. Up 9.51 seconds.
Jan 22 07:49:18 np0005592159 systemd[1]: Finished Cloud-init: Config Stage.
Jan 22 07:49:18 np0005592159 systemd[1]: Starting Cloud-init: Final Stage...
Jan 22 07:49:19 np0005592159 dracut[1264]: dracut-057-102.git20250818.el9
Jan 22 07:49:19 np0005592159 cloud-init[1272]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Thu, 22 Jan 2026 12:49:19 +0000. Up 9.87 seconds.
Jan 22 07:49:19 np0005592159 cloud-init[1282]: #############################################################
Jan 22 07:49:19 np0005592159 cloud-init[1283]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 22 07:49:19 np0005592159 cloud-init[1285]: 256 SHA256:N07ntx9a6ee1wXNKZCLxHO+EmOEsTiu/ut92zp8Te3Q root@np0005592159.novalocal (ECDSA)
Jan 22 07:49:19 np0005592159 cloud-init[1287]: 256 SHA256:1FUgzkEXvMlB1Efuil9gJ1N6+xIx3oHGUMLDz4R4tWE root@np0005592159.novalocal (ED25519)
Jan 22 07:49:19 np0005592159 cloud-init[1289]: 3072 SHA256:ZrAkF9Xrv+nsA28p+s/bLd5i3L5ajk6r69DLcZjH8XE root@np0005592159.novalocal (RSA)
Jan 22 07:49:19 np0005592159 cloud-init[1290]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 22 07:49:19 np0005592159 cloud-init[1291]: #############################################################
Jan 22 07:49:19 np0005592159 cloud-init[1272]: Cloud-init v. 24.4-8.el9 finished at Thu, 22 Jan 2026 12:49:19 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.08 seconds
Jan 22 07:49:19 np0005592159 dracut[1266]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 22 07:49:19 np0005592159 systemd[1]: Finished Cloud-init: Final Stage.
Jan 22 07:49:19 np0005592159 systemd[1]: Reached target Cloud-init target.
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:49:19 np0005592159 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: memstrack is not available
Jan 22 07:49:20 np0005592159 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 22 07:49:20 np0005592159 dracut[1266]: memstrack is not available
Jan 22 07:49:20 np0005592159 dracut[1266]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 22 07:49:21 np0005592159 dracut[1266]: *** Including module: systemd ***
Jan 22 07:49:21 np0005592159 dracut[1266]: *** Including module: fips ***
Jan 22 07:49:21 np0005592159 chronyd[798]: Selected source 198.181.199.84 (2.centos.pool.ntp.org)
Jan 22 07:49:21 np0005592159 chronyd[798]: System clock TAI offset set to 37 seconds
Jan 22 07:49:21 np0005592159 dracut[1266]: *** Including module: systemd-initrd ***
Jan 22 07:49:21 np0005592159 dracut[1266]: *** Including module: i18n ***
Jan 22 07:49:21 np0005592159 dracut[1266]: *** Including module: drm ***
Jan 22 07:49:22 np0005592159 dracut[1266]: *** Including module: prefixdevname ***
Jan 22 07:49:22 np0005592159 dracut[1266]: *** Including module: kernel-modules ***
Jan 22 07:49:22 np0005592159 kernel: block vda: the capability attribute has been deprecated.
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: kernel-modules-extra ***
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: qemu ***
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: fstab-sys ***
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: rootfs-block ***
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: terminfo ***
Jan 22 07:49:23 np0005592159 dracut[1266]: *** Including module: udev-rules ***
Jan 22 07:49:23 np0005592159 dracut[1266]: Skipping udev rule: 91-permissions.rules
Jan 22 07:49:23 np0005592159 dracut[1266]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: virtiofs ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: dracut-systemd ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: usrmount ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: base ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: fs-lib ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: kdumpbase ***
Jan 22 07:49:24 np0005592159 dracut[1266]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 22 07:49:24 np0005592159 dracut[1266]:  microcode_ctl module: mangling fw_dir
Jan 22 07:49:24 np0005592159 dracut[1266]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 22 07:49:24 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 25 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 31 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 28 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 32 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 30 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 irqbalance[785]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 22 07:49:25 np0005592159 irqbalance[785]: IRQ 29 affinity is now unmanaged
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 22 07:49:25 np0005592159 dracut[1266]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 22 07:49:25 np0005592159 dracut[1266]: *** Including module: openssl ***
Jan 22 07:49:25 np0005592159 dracut[1266]: *** Including module: shutdown ***
Jan 22 07:49:25 np0005592159 dracut[1266]: *** Including module: squash ***
Jan 22 07:49:25 np0005592159 dracut[1266]: *** Including modules done ***
Jan 22 07:49:25 np0005592159 dracut[1266]: *** Installing kernel module dependencies ***
Jan 22 07:49:26 np0005592159 dracut[1266]: *** Installing kernel module dependencies done ***
Jan 22 07:49:26 np0005592159 dracut[1266]: *** Resolving executable dependencies ***
Jan 22 07:49:26 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:49:28 np0005592159 dracut[1266]: *** Resolving executable dependencies done ***
Jan 22 07:49:28 np0005592159 dracut[1266]: *** Generating early-microcode cpio image ***
Jan 22 07:49:28 np0005592159 dracut[1266]: *** Store current command line parameters ***
Jan 22 07:49:28 np0005592159 dracut[1266]: Stored kernel commandline:
Jan 22 07:49:28 np0005592159 dracut[1266]: No dracut internal kernel commandline stored in the initramfs
Jan 22 07:49:28 np0005592159 dracut[1266]: *** Install squash loader ***
Jan 22 07:49:29 np0005592159 dracut[1266]: *** Squashing the files inside the initramfs ***
Jan 22 07:49:30 np0005592159 dracut[1266]: *** Squashing the files inside the initramfs done ***
Jan 22 07:49:30 np0005592159 dracut[1266]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 22 07:49:30 np0005592159 dracut[1266]: *** Hardlinking files ***
Jan 22 07:49:30 np0005592159 dracut[1266]: *** Hardlinking files done ***
Jan 22 07:49:31 np0005592159 dracut[1266]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 22 07:49:31 np0005592159 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Jan 22 07:49:31 np0005592159 kdumpctl[1015]: kdump: Starting kdump: [OK]
Jan 22 07:49:31 np0005592159 systemd[1]: Finished Crash recovery kernel arming.
Jan 22 07:49:31 np0005592159 systemd[1]: Startup finished in 1.753s (kernel) + 2.827s (initrd) + 17.752s (userspace) = 22.334s.
Jan 22 07:49:46 np0005592159 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:50:30 np0005592159 systemd[1]: Created slice User Slice of UID 1000.
Jan 22 07:50:30 np0005592159 systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 22 07:50:30 np0005592159 systemd-logind[787]: New session 1 of user zuul.
Jan 22 07:50:30 np0005592159 systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 22 07:50:30 np0005592159 systemd[1]: Starting User Manager for UID 1000...
Jan 22 07:50:30 np0005592159 systemd[4305]: Queued start job for default target Main User Target.
Jan 22 07:50:30 np0005592159 systemd[4305]: Created slice User Application Slice.
Jan 22 07:50:30 np0005592159 systemd[4305]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 07:50:30 np0005592159 systemd[4305]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 07:50:30 np0005592159 systemd[4305]: Reached target Paths.
Jan 22 07:50:30 np0005592159 systemd[4305]: Reached target Timers.
Jan 22 07:50:30 np0005592159 systemd[4305]: Starting D-Bus User Message Bus Socket...
Jan 22 07:50:30 np0005592159 systemd[4305]: Starting Create User's Volatile Files and Directories...
Jan 22 07:50:30 np0005592159 systemd[4305]: Listening on D-Bus User Message Bus Socket.
Jan 22 07:50:30 np0005592159 systemd[4305]: Reached target Sockets.
Jan 22 07:50:30 np0005592159 systemd[4305]: Finished Create User's Volatile Files and Directories.
Jan 22 07:50:30 np0005592159 systemd[4305]: Reached target Basic System.
Jan 22 07:50:30 np0005592159 systemd[4305]: Reached target Main User Target.
Jan 22 07:50:30 np0005592159 systemd[4305]: Startup finished in 164ms.
Jan 22 07:50:30 np0005592159 systemd[1]: Started User Manager for UID 1000.
Jan 22 07:50:30 np0005592159 systemd[1]: Started Session 1 of User zuul.
Jan 22 07:50:31 np0005592159 python3[4387]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:35 np0005592159 python3[4417]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:41 np0005592159 python3[4475]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:42 np0005592159 python3[4515]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 22 07:50:44 np0005592159 python3[4541]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1DCoRB3r0Iy6aGg4LRzpWVb+uDCW+ivahM6mnwYTzs7NyJlgPrnZ6PV7GhjThi3qMi3wdL9+LpBaBPuOhI+k1w3f1FS+zKP3/xb59Ck+AhF8LIp3InS3sgWlvIGvXYvlwuN3aBMHp/hbvFOtbZFxgXhvIlVsk+m1K/J/50vtBBzyri7EjoTWDvY18FZoapjDeqss1t7AvCXVAcsVOfZsyssdWALG/AlGcmeZ9kZ/yza1tS0t7avldh0ZazNkLg/5jp3HQrTFLiETLQx8tBjdEj0Pme6UqjG17uVJkEVl4g3FLGiT4krCLRjW0sA3E3rd5e1m4tBIoSSqoqN2E+V9ctp/6T9Vpe3OcZdgKBUE9yz4tlHgQLxksFY2SiXEQYiWTctsRY30EsMJk2Qg65Fyp/ts6u4u66Uo27jNRB+ZD/vnAY4IKu94a2+6uIW/9oShh4f1cWrBlFzxXaUBj4KHar7HFljsOCavs7NCPccp7JoW8FoXONrfM+rhSgDbeDGE= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:45 np0005592159 python3[4565]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:45 np0005592159 python3[4664]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:46 np0005592159 python3[4735]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086245.5038712-253-258230427090939/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa follow=False checksum=9eec2026f94d681755d58aa430eaf5c6b319017b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:46 np0005592159 python3[4858]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:47 np0005592159 python3[4929]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086246.4849834-308-233483319450064/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=09ef681cfe834983ad1540236f6f180d_id_rsa.pub follow=False checksum=f8a39b98331ab3302b65dacd0b8176268aaf7e5b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:49 np0005592159 python3[4977]: ansible-ping Invoked with data=pong
Jan 22 07:50:50 np0005592159 python3[5001]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 07:50:52 np0005592159 python3[5059]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 22 07:50:53 np0005592159 python3[5091]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592159 python3[5115]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592159 python3[5139]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:54 np0005592159 python3[5163]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592159 python3[5187]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:55 np0005592159 python3[5211]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:57 np0005592159 python3[5237]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:58 np0005592159 python3[5315]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:50:58 np0005592159 python3[5388]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086257.5701644-34-132735642924106/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:50:59 np0005592159 python3[5436]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592159 python3[5460]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:50:59 np0005592159 python3[5484]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592159 python3[5508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592159 python3[5532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592159 python3[5556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:00 np0005592159 python3[5580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592159 python3[5604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592159 python3[5628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:01 np0005592159 python3[5652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592159 python3[5676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592159 python3[5700]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592159 python3[5724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:02 np0005592159 python3[5748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592159 python3[5772]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592159 python3[5796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:03 np0005592159 python3[5820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592159 python3[5844]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592159 python3[5868]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592159 python3[5892]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:04 np0005592159 python3[5916]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592159 python3[5940]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592159 python3[5964]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:05 np0005592159 python3[5988]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592159 python3[6012]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:06 np0005592159 python3[6036]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 07:51:08 np0005592159 python3[6062]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 07:51:08 np0005592159 systemd[1]: Starting Time & Date Service...
Jan 22 07:51:08 np0005592159 systemd[1]: Started Time & Date Service.
Jan 22 07:51:09 np0005592159 systemd-timedated[6064]: Changed time zone to 'UTC' (UTC).
Jan 22 07:51:09 np0005592159 python3[6093]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:10 np0005592159 python3[6169]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:10 np0005592159 python3[6240]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1769086269.735884-254-188511559888107/source _original_basename=tmplj16a1bi follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:11 np0005592159 python3[6340]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:11 np0005592159 python3[6411]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086270.8039196-305-15278230150983/source _original_basename=tmp7ik5k7i8 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:12 np0005592159 python3[6513]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:13 np0005592159 python3[6586]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1769086272.489041-384-194965756008516/source _original_basename=tmpvj899g3v follow=False checksum=19d309ebea5b58181725fc1dc4cea95ea4d18865 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:13 np0005592159 python3[6634]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592159 python3[6660]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:14 np0005592159 python3[6740]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:51:15 np0005592159 python3[6813]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086274.4080887-454-224720339979334/source _original_basename=tmpzkadzulz follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:16 np0005592159 python3[6866]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-37d2-1cc7-00000000001f-1-compute2 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:51:16 np0005592159 python3[6893]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-37d2-1cc7-000000000020-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 22 07:51:18 np0005592159 python3[6922]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:51:39 np0005592159 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 07:51:43 np0005592159 python3[6952]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:52:43 np0005592159 systemd-logind[787]: Session 1 logged out. Waiting for processes to exit.
Jan 22 07:52:45 np0005592159 systemd[4305]: Starting Mark boot as successful...
Jan 22 07:52:45 np0005592159 systemd[4305]: Finished Mark boot as successful.
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 22 07:53:20 np0005592159 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 22 07:53:20 np0005592159 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6576] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:53:20 np0005592159 systemd-udevd[6957]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6786] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6831] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6836] device (eth1): carrier: link connected
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6839] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6849] policy: auto-activating connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6856] device (eth1): Activation: starting connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6857] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6861] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6867] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:53:20 np0005592159 NetworkManager[854]: <info>  [1769086400.6875] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:21 np0005592159 systemd-logind[787]: New session 3 of user zuul.
Jan 22 07:53:21 np0005592159 systemd[1]: Started Session 3 of User zuul.
Jan 22 07:53:21 np0005592159 python3[6987]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-97dc-dff7-0000000001f6-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:53:32 np0005592159 python3[7067]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:53:32 np0005592159 python3[7140]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769086411.7768462-206-203881184855265/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=2700db3a9722b22b06523fa143bc24bf7058877a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:53:33 np0005592159 python3[7190]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 07:53:33 np0005592159 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 07:53:33 np0005592159 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 07:53:33 np0005592159 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 07:53:33 np0005592159 systemd[1]: Stopping Network Manager...
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1652] caught SIGTERM, shutting down normally.
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1673] dhcp4 (eth0): canceled DHCP transaction
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1674] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1674] dhcp4 (eth0): state changed no lease
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1677] manager: NetworkManager state is now CONNECTING
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1814] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1814] dhcp4 (eth1): state changed no lease
Jan 22 07:53:33 np0005592159 NetworkManager[854]: <info>  [1769086413.1891] exiting (success)
Jan 22 07:53:33 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:53:33 np0005592159 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 07:53:33 np0005592159 systemd[1]: Stopped Network Manager.
Jan 22 07:53:33 np0005592159 systemd[1]: NetworkManager.service: Consumed 1.780s CPU time, 10.0M memory peak.
Jan 22 07:53:33 np0005592159 systemd[1]: Starting Network Manager...
Jan 22 07:53:33 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.2591] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.2595] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.2665] manager[0x563ab64fb000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 07:53:33 np0005592159 systemd[1]: Starting Hostname Service...
Jan 22 07:53:33 np0005592159 systemd[1]: Started Hostname Service.
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3724] hostname: hostname: using hostnamed
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3725] hostname: static hostname changed from (none) to "np0005592159.novalocal"
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3732] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3739] manager[0x563ab64fb000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3740] manager[0x563ab64fb000]: rfkill: WWAN hardware radio set enabled
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3788] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3788] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3789] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3791] manager: Networking is enabled by state file
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3794] settings: Loaded settings plugin: keyfile (internal)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3800] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3845] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3861] dhcp: init: Using DHCP client 'internal'
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3866] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3875] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3883] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3896] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3907] device (eth0): carrier: link connected
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3915] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3924] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3925] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3936] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3950] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3959] device (eth1): carrier: link connected
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3966] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3976] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7) (indicated)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3977] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3986] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.3998] device (eth1): Activation: starting connection 'Wired connection 1' (128e382a-734b-354e-b29c-4c5a72c08cb7)
Jan 22 07:53:33 np0005592159 systemd[1]: Started Network Manager.
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4005] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4011] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4016] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4019] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4022] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4028] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4032] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4036] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4041] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4051] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4064] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4080] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4086] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4111] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4118] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4126] device (lo): Activation: successful, device activated.
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4137] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4147] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 07:53:33 np0005592159 systemd[1]: Starting Network Manager Wait Online...
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4213] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4242] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4244] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4248] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4251] device (eth0): Activation: successful, device activated.
Jan 22 07:53:33 np0005592159 NetworkManager[7199]: <info>  [1769086413.4257] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 07:53:33 np0005592159 python3[7275]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-97dc-dff7-0000000000d3-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 07:53:43 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:54:03 np0005592159 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2401] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 07:54:18 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 07:54:18 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2785] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2791] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2810] device (eth1): Activation: successful, device activated.
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2825] manager: startup complete
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2828] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <warn>  [1769086458.2848] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2866] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 systemd[1]: Finished Network Manager Wait Online.
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2984] dhcp4 (eth1): canceled DHCP transaction
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2985] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.2985] dhcp4 (eth1): state changed no lease
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3005] policy: auto-activating connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3012] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3013] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3017] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3027] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3040] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3085] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3087] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 07:54:18 np0005592159 NetworkManager[7199]: <info>  [1769086458.3095] device (eth1): Activation: successful, device activated.
Jan 22 07:54:28 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 07:54:33 np0005592159 systemd[1]: session-3.scope: Deactivated successfully.
Jan 22 07:54:33 np0005592159 systemd[1]: session-3.scope: Consumed 1.840s CPU time.
Jan 22 07:54:33 np0005592159 systemd-logind[787]: Session 3 logged out. Waiting for processes to exit.
Jan 22 07:54:33 np0005592159 systemd-logind[787]: Removed session 3.
Jan 22 07:54:59 np0005592159 systemd-logind[787]: New session 4 of user zuul.
Jan 22 07:54:59 np0005592159 systemd[1]: Started Session 4 of User zuul.
Jan 22 07:55:00 np0005592159 python3[7387]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 07:55:00 np0005592159 python3[7460]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086499.966251-373-55631708650966/source _original_basename=tmpado48coe follow=False checksum=5e7e0974f47bfd675c68ead6f6109233c4c9d481 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 07:55:02 np0005592159 systemd[1]: session-4.scope: Deactivated successfully.
Jan 22 07:55:02 np0005592159 systemd-logind[787]: Session 4 logged out. Waiting for processes to exit.
Jan 22 07:55:02 np0005592159 systemd-logind[787]: Removed session 4.
Jan 22 07:55:45 np0005592159 systemd[4305]: Created slice User Background Tasks Slice.
Jan 22 07:55:45 np0005592159 systemd[4305]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 07:55:45 np0005592159 systemd[4305]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:00:10 np0005592159 systemd-logind[787]: New session 5 of user zuul.
Jan 22 08:00:10 np0005592159 systemd[1]: Started Session 5 of User zuul.
Jan 22 08:00:10 np0005592159 python3[7529]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca0-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:11 np0005592159 python3[7559]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592159 python3[7585]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592159 python3[7611]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:12 np0005592159 python3[7637]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:13 np0005592159 python3[7663]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:13 np0005592159 python3[7741]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:00:13 np0005592159 python3[7814]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086813.2488887-364-108910745133351/source _original_basename=tmpwmjwvnyv follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:00:14 np0005592159 python3[7864]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:00:14 np0005592159 systemd[1]: Reloading.
Jan 22 08:00:15 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:00:16 np0005592159 python3[7919]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 22 08:00:17 np0005592159 python3[7945]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592159 python3[7973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592159 python3[8001]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:18 np0005592159 python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592159 python3[8056]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-68e9-2a3f-000000000ca7-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:00:19 np0005592159 python3[8086]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
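The command tasks above apply cgroup v2 I/O throttles for block device 252:0 to each top-level slice, then read the limits back; the wait_for and stat tasks only check that the io.max files exist (kubepods.slice is merely probed). `#012` is journald's escape for a newline. Unescaped, each write is a plain shell redirection; a minimal sketch of the whole step, assuming 252:0 is the virtio root disk:

    # throttle each top-level slice to 18000 read/write IOPS and 262144000 B/s (250 MiB/s)
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
            > "/sys/fs/cgroup/${cg}/io.max"
    done
    # read the limits back, as the follow-up task does
    for cg in init.scope machine.slice system.slice user.slice; do
        echo "${cg}"; cat "/sys/fs/cgroup/${cg}/io.max"
    done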
Jan 22 08:00:22 np0005592159 systemd[1]: session-5.scope: Deactivated successfully.
Jan 22 08:00:22 np0005592159 systemd[1]: session-5.scope: Consumed 4.584s CPU time.
Jan 22 08:00:22 np0005592159 systemd-logind[787]: Session 5 logged out. Waiting for processes to exit.
Jan 22 08:00:22 np0005592159 systemd-logind[787]: Removed session 5.
Jan 22 08:00:24 np0005592159 systemd-logind[787]: New session 6 of user zuul.
Jan 22 08:00:24 np0005592159 systemd[1]: Started Session 6 of User zuul.
Jan 22 08:00:25 np0005592159 python3[8120]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:00:31 np0005592159 setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 22 08:00:31 np0005592159 setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 22 08:00:45 np0005592159 kernel: SELinux:  Converting 383 SID table entries...
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:00:45 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  Converting 387 SID table entries...
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:01:02 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:01:21 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:21 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:01:21 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:01:21 np0005592159 systemd[1]: Reloading.
Jan 22 08:01:21 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:01:21 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:01:25 np0005592159 python3[10659]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-af35-cd98-00000000000c-1-compute2 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:01:26 np0005592159 kernel: evm: overlay not supported
Jan 22 08:01:26 np0005592159 systemd[4305]: Starting D-Bus User Message Bus...
Jan 22 08:01:26 np0005592159 dbus-broker-launch[11936]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592159 dbus-broker-launch[11936]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 22 08:01:26 np0005592159 systemd[4305]: Started D-Bus User Message Bus.
Jan 22 08:01:26 np0005592159 dbus-broker-lau[11936]: Ready
Jan 22 08:01:26 np0005592159 systemd[4305]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 22 08:01:26 np0005592159 systemd[4305]: Created slice Slice /user.
Jan 22 08:01:26 np0005592159 systemd[4305]: podman-11817.scope: unit configures an IP firewall, but not running as root.
Jan 22 08:01:26 np0005592159 systemd[4305]: (This warning is only shown for the first unit using IP firewalling.)
Jan 22 08:01:26 np0005592159 systemd[4305]: Started podman-11817.scope.
Jan 22 08:01:26 np0005592159 systemd[4305]: Started podman-pause-3b1c51bd.scope.
Jan 22 08:01:27 np0005592159 python3[12760]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.194:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
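The blockinfile task above appends an Ansible-managed block to /etc/containers/registries.conf so podman/buildah will pull from the CI registry without TLS. Given marker_begin=BEGIN, marker_end=END and the logged content, the appended block should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.194:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK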
Jan 22 08:01:27 np0005592159 python3[12760]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 22 08:01:28 np0005592159 systemd[1]: session-6.scope: Deactivated successfully.
Jan 22 08:01:28 np0005592159 systemd[1]: session-6.scope: Consumed 47.874s CPU time.
Jan 22 08:01:28 np0005592159 systemd-logind[787]: Session 6 logged out. Waiting for processes to exit.
Jan 22 08:01:28 np0005592159 systemd-logind[787]: Removed session 6.
Jan 22 08:01:54 np0005592159 systemd-logind[787]: New session 7 of user zuul.
Jan 22 08:01:54 np0005592159 systemd[1]: Started Session 7 of User zuul.
Jan 22 08:01:54 np0005592159 python3[21468]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:55 np0005592159 python3[21711]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:01:55 np0005592159 irqbalance[785]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 22 08:01:55 np0005592159 irqbalance[785]: IRQ 27 affinity is now unmanaged
Jan 22 08:01:55 np0005592159 python3[22044]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005592159.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 22 08:01:59 np0005592159 python3[23191]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJXWzJINFux2Y3W71Rz6OTPUrCjH8iByostW8OdI2DuZKTtkp9FbD8EiNvlPjARok6n/DFn2L3T6ys0ILkIENxo= zuul@np0005592156.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 22 08:02:00 np0005592159 python3[23592]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:02:01 np0005592159 python3[23837]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769086920.377604-170-19221314951872/source _original_basename=tmpswz6jnnk follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:02:02 np0005592159 python3[24155]: ansible-ansible.builtin.hostname Invoked with name=compute-2 use=systemd
Jan 22 08:02:02 np0005592159 systemd[1]: Starting Hostname Service...
Jan 22 08:02:02 np0005592159 systemd[1]: Started Hostname Service.
Jan 22 08:02:02 np0005592159 systemd-hostnamed[24255]: Changed pretty hostname to 'compute-2'
Jan 22 08:02:02 np0005592159 systemd-hostnamed[24255]: Hostname set to <compute-2> (static)
Jan 22 08:02:02 np0005592159 NetworkManager[7199]: <info>  [1769086922.2909] hostname: static hostname changed from "np0005592159.novalocal" to "compute-2"
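ansible.builtin.hostname with use=systemd goes through systemd-hostnamed, which is why the pretty/static hostname changes and the NetworkManager notification follow immediately. Roughly the same effect from a shell, assuming the hostnamectl CLI:

    hostnamectl set-hostname compute-2
    hostnamectl status   # confirm the static hostname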
Jan 22 08:02:02 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:02:02 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:02:02 np0005592159 systemd[1]: session-7.scope: Deactivated successfully.
Jan 22 08:02:02 np0005592159 systemd[1]: session-7.scope: Consumed 2.323s CPU time.
Jan 22 08:02:02 np0005592159 systemd-logind[787]: Session 7 logged out. Waiting for processes to exit.
Jan 22 08:02:02 np0005592159 systemd-logind[787]: Removed session 7.
Jan 22 08:02:12 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:02:28 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:02:28 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:02:28 np0005592159 systemd[1]: man-db-cache-update.service: Consumed 1min 5.501s CPU time.
Jan 22 08:02:28 np0005592159 systemd[1]: run-r43094218693f467588d414b5e14fe722.service: Deactivated successfully.
Jan 22 08:02:32 np0005592159 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:04:25 np0005592159 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 22 08:04:26 np0005592159 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 22 08:04:26 np0005592159 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 22 08:04:26 np0005592159 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 22 08:07:01 np0005592159 systemd-logind[787]: New session 8 of user zuul.
Jan 22 08:07:01 np0005592159 systemd[1]: Started Session 8 of User zuul.
Jan 22 08:07:01 np0005592159 python3[30040]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:07:03 np0005592159 python3[30156]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:03 np0005592159 python3[30229]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:04 np0005592159 python3[30255]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:04 np0005592159 python3[30328]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:04 np0005592159 python3[30354]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:05 np0005592159 python3[30427]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:05 np0005592159 python3[30453]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:05 np0005592159 python3[30526]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:05 np0005592159 python3[30552]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:06 np0005592159 python3[30625]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:06 np0005592159 python3[30651]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:06 np0005592159 python3[30724]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:07 np0005592159 python3[30750]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:07:07 np0005592159 python3[30823]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1769087223.1507885-34126-59156340687819/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:07:20 np0005592159 python3[30871]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:12:20 np0005592159 systemd[1]: session-8.scope: Deactivated successfully.
Jan 22 08:12:20 np0005592159 systemd[1]: session-8.scope: Consumed 4.750s CPU time.
Jan 22 08:12:20 np0005592159 systemd-logind[787]: Session 8 logged out. Waiting for processes to exit.
Jan 22 08:12:20 np0005592159 systemd-logind[787]: Removed session 8.
Jan 22 08:21:58 np0005592159 systemd-logind[787]: New session 9 of user zuul.
Jan 22 08:21:58 np0005592159 systemd[1]: Started Session 9 of User zuul.
Jan 22 08:21:59 np0005592159 python3.9[31061]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:00 np0005592159 python3.9[31242]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
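`#012` in the task above is the journald escape for a newline; unescaped, the logged shell body is approximately:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

i.e. it builds the repo-setup tool in a throwaway venv and uses it to configure the antelope package repositories.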
Jan 22 08:22:11 np0005592159 systemd-logind[787]: Session 9 logged out. Waiting for processes to exit.
Jan 22 08:22:11 np0005592159 systemd[1]: session-9.scope: Deactivated successfully.
Jan 22 08:22:11 np0005592159 systemd[1]: session-9.scope: Consumed 8.010s CPU time.
Jan 22 08:22:11 np0005592159 systemd-logind[787]: Removed session 9.
Jan 22 08:22:26 np0005592159 systemd-logind[787]: New session 10 of user zuul.
Jan 22 08:22:26 np0005592159 systemd[1]: Started Session 10 of User zuul.
Jan 22 08:22:27 np0005592159 python3.9[31459]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:22:29 np0005592159 python3.9[31633]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:30 np0005592159 python3.9[31785]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:22:31 np0005592159 python3.9[31938]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:22:32 np0005592159 python3.9[32090]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:32 np0005592159 python3.9[32242]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:22:33 np0005592159 python3.9[32365]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088152.4649065-180-93825057625088/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:34 np0005592159 python3.9[32517]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:35 np0005592159 python3.9[32673]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:36 np0005592159 python3.9[32825]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:22:37 np0005592159 python3.9[32975]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:22:42 np0005592159 python3.9[33228]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:22:42 np0005592159 python3.9[33378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:44 np0005592159 python3.9[33533]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:22:45 np0005592159 python3.9[33691]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:22:46 np0005592159 python3.9[33775]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:23:42 np0005592159 systemd[1]: Reloading.
Jan 22 08:23:42 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:42 np0005592159 systemd[1]: Starting dnf makecache...
Jan 22 08:23:42 np0005592159 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 22 08:23:43 np0005592159 dnf[33988]: Failed determining last makecache time.
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 164 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-cinder-1c00d6490d88e436f26ef 176 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-stevedore-c4acc5639fd2329372142 154 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-os-refresh-config-9bfc52b5049be2d8de61 171 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 158 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-designate-tests-tempest-347fdbc 162 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 systemd[1]: Reloading.
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-glance-1fd12c29b339f30fe823e 152 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 171 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-manila-3c01b7181572c95dac462 155 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-whitebox-neutron-tests-tempest-  94 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-octavia-ba397f07a7331190208c 115 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-watcher-c014f81a8647287f6dcc 150 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-ansible-config_template-5ccaa22121a7ff 152 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 151 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-swift-dc98a8463506ac520c469a 166 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-python-tempestconf-8515371b7cceebd4282 104 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 dnf[33988]: delorean-openstack-heat-ui-013accbfd179753bc3f0 101 kB/s | 3.0 kB     00:00
Jan 22 08:23:43 np0005592159 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 22 08:23:43 np0005592159 systemd[1]: Reloading.
Jan 22 08:23:43 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:23:43 np0005592159 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 22 08:23:43 np0005592159 dnf[33988]: CentOS Stream 9 - BaseOS                         29 kB/s | 6.7 kB     00:00
Jan 22 08:23:44 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:23:44 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:23:44 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:23:44 np0005592159 dnf[33988]: CentOS Stream 9 - AppStream                      30 kB/s | 6.8 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: CentOS Stream 9 - CRB                            56 kB/s | 6.6 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: CentOS Stream 9 - Extras packages                31 kB/s | 7.3 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: dlrn-antelope-testing                           113 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: dlrn-antelope-build-deps                        106 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: centos9-rabbitmq                                 89 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: centos9-storage                                 103 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: centos9-opstools                                106 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: NFV SIG OpenvSwitch                             122 kB/s | 3.0 kB     00:00
Jan 22 08:23:44 np0005592159 dnf[33988]: repo-setup-centos-appstream                     193 kB/s | 4.4 kB     00:00
Jan 22 08:23:45 np0005592159 dnf[33988]: repo-setup-centos-baseos                        157 kB/s | 3.9 kB     00:00
Jan 22 08:23:45 np0005592159 dnf[33988]: repo-setup-centos-highavailability              145 kB/s | 3.9 kB     00:00
Jan 22 08:23:45 np0005592159 dnf[33988]: repo-setup-centos-powertools                    171 kB/s | 4.3 kB     00:00
Jan 22 08:23:45 np0005592159 dnf[33988]: Extra Packages for Enterprise Linux 9 - x86_64  208 kB/s |  25 kB     00:00
Jan 22 08:23:45 np0005592159 dnf[33988]: Metadata cache created.
Jan 22 08:23:46 np0005592159 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 08:23:46 np0005592159 systemd[1]: Finished dnf makecache.
Jan 22 08:23:46 np0005592159 systemd[1]: dnf-makecache.service: Consumed 1.983s CPU time.
Jan 22 08:24:55 np0005592159 kernel: SELinux:  Converting 2723 SID table entries...
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:24:55 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:24:56 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 22 08:24:56 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:24:56 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:24:56 np0005592159 systemd[1]: Reloading.
Jan 22 08:24:56 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:24:56 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:24:57 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:24:57 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:24:57 np0005592159 systemd[1]: man-db-cache-update.service: Consumed 1.069s CPU time.
Jan 22 08:24:57 np0005592159 systemd[1]: run-re6a8c645af0a4cf0be66481f23587e9d.service: Deactivated successfully.
Jan 22 08:24:57 np0005592159 python3.9[35371]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:24:59 np0005592159 python3.9[35652]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:25:00 np0005592159 python3.9[35804]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:25:05 np0005592159 python3.9[35957]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:10 np0005592159 python3.9[36109]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
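ansible.posix.mount with state=present only edits /etc/fstab; it does not activate the swap (that happens later with mkswap/swapon). With the parameters above the resulting fstab entry should be:

    /swap none swap sw 0 0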
Jan 22 08:25:11 np0005592159 python3.9[36261]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:12 np0005592159 python3.9[36413]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:13 np0005592159 python3.9[36536]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088311.9978292-669-254647316206721/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:25:14 np0005592159 python3.9[36688]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:15 np0005592159 python3.9[36840]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:16 np0005592159 python3.9[36993]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
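vgimportdevices --all populates the LVM devices file with any volume groups already present, and the follow-up touch ensures /etc/lvm/devices/system.devices exists even on a host with no VGs, so LVM operates in devices-file mode from here on. A rough manual equivalent, assuming the stock lvm2 tooling:

    /usr/sbin/vgimportdevices --all
    touch /etc/lvm/devices/system.devices
    chown root:root /etc/lvm/devices/system.devices && chmod 0600 /etc/lvm/devices/system.devices
    lvmdevices   # list what ended up in the devices file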
Jan 22 08:25:17 np0005592159 python3.9[37145]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:25:17 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:25:17 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:25:18 np0005592159 python3.9[37299]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:20 np0005592159 python3.9[37457]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:25:21 np0005592159 python3.9[37617]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:25:21 np0005592159 python3.9[37770]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:25:22 np0005592159 python3.9[37928]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
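These tasks pre-create the qemu and hugetlbfs identities with fixed IDs and a vhost socket directory labeled for virt, presumably so the host and later containerized services agree on ownership. Roughly equivalent shell, with the SELinux label taken from the module's seuser/setype parameters:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -s /sbin/nologin -c "qemu user" qemu
    groupadd -g 42477 hugetlbfs
    mkdir -p /var/lib/vhost_sockets
    chown qemu:qemu /var/lib/vhost_sockets && chmod 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets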
Jan 22 08:25:23 np0005592159 python3.9[38080]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:29 np0005592159 python3.9[38234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:30 np0005592159 python3.9[38386]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:30 np0005592159 python3.9[38509]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088329.702185-1026-74188786084060/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:32 np0005592159 python3.9[38661]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:32 np0005592159 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:25:32 np0005592159 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 22 08:25:32 np0005592159 kernel: Bridge firewalling registered
Jan 22 08:25:32 np0005592159 systemd-modules-load[38665]: Inserted module 'br_netfilter'
Jan 22 08:25:32 np0005592159 systemd[1]: Finished Load Kernel Modules.
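Restarting systemd-modules-load after dropping /etc/modules-load.d/99-edpm.conf loaded br_netfilter (the bridge messages above), so the file evidently lists at least that module; the modules-load.d format is one module name per line:

    # /etc/modules-load.d/99-edpm.conf -- only br_netfilter is confirmed by this log
    br_netfilter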
Jan 22 08:25:33 np0005592159 python3.9[38821]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:25:33 np0005592159 python3.9[38944]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088332.5190475-1095-28453236044137/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:25:34 np0005592159 python3.9[39096]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:25:37 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:25:38 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:25:38 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:25:38 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:25:38 np0005592159 systemd[1]: Reloading.
Jan 22 08:25:38 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:38 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:25:40 np0005592159 python3.9[41435]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:41 np0005592159 python3.9[42468]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:25:42 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:25:42 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:25:42 np0005592159 systemd[1]: man-db-cache-update.service: Consumed 4.921s CPU time.
Jan 22 08:25:42 np0005592159 systemd[1]: run-r5db5ed034bd64228832cc77fe1b394c9.service: Deactivated successfully.
Jan 22 08:25:42 np0005592159 python3.9[43110]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:25:43 np0005592159 python3.9[43264]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:43 np0005592159 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:43 np0005592159 systemd[1]: Starting Authorization Manager...
Jan 22 08:25:43 np0005592159 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:25:43 np0005592159 polkitd[43481]: Started polkitd version 0.117
Jan 22 08:25:43 np0005592159 systemd[1]: Started Authorization Manager.
Jan 22 08:25:45 np0005592159 python3.9[43651]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:45 np0005592159 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592159 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:25:45 np0005592159 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:25:45 np0005592159 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:25:45 np0005592159 systemd[1]: Started Dynamic System Tuning Daemon.
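The switch to the throughput-performance profile can be confirmed the same way the playbook does, by reading the file it slurped earlier, or with tuned-adm:

    cat /etc/tuned/active_profile   # expected: throughput-performance
    tuned-adm active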
Jan 22 08:25:46 np0005592159 python3.9[43812]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:25:50 np0005592159 python3.9[43964]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:50 np0005592159 systemd[1]: Reloading.
Jan 22 08:25:50 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:51 np0005592159 python3.9[44154]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:25:51 np0005592159 systemd[1]: Reloading.
Jan 22 08:25:51 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:25:53 np0005592159 python3.9[44343]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:53 np0005592159 python3.9[44496]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:54 np0005592159 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
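Taken together, the swap tasks amount to the following sequence; the kernel line above confirms the result (a 1 GiB file minus one page for the swap header):

    dd if=/dev/zero of=/swap count=1024 bs=1M   # skipped on reruns via creates=/swap
    chown root:root /swap && chmod 0600 /swap
    mkswap /swap
    swapon /swap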
Jan 22 08:25:54 np0005592159 python3.9[44649]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:25:57 np0005592159 python3.9[44811]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
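Writing 2 to /sys/kernel/mm/ksm/run stops kernel samepage merging and unmerges pages that were already shared, which matches the earlier tasks that stopped and disabled ksm.service and ksmtuned.service:

    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/run   # values: 0 = stop, 1 = run, 2 = stop and unmerge shared pages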
Jan 22 08:25:57 np0005592159 python3.9[44964]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:25:58 np0005592159 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 22 08:25:58 np0005592159 systemd[1]: Stopped Apply Kernel Variables.
Jan 22 08:25:58 np0005592159 systemd[1]: Stopping Apply Kernel Variables...
Jan 22 08:25:58 np0005592159 systemd[1]: Starting Apply Kernel Variables...
Jan 22 08:25:58 np0005592159 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 22 08:25:58 np0005592159 systemd[1]: Finished Apply Kernel Variables.
Jan 22 08:25:58 np0005592159 systemd[1]: session-10.scope: Deactivated successfully.
Jan 22 08:25:58 np0005592159 systemd[1]: session-10.scope: Consumed 2min 16.191s CPU time.
Jan 22 08:25:58 np0005592159 systemd-logind[787]: Session 10 logged out. Waiting for processes to exit.
Jan 22 08:25:58 np0005592159 systemd-logind[787]: Removed session 10.
Jan 22 08:26:03 np0005592159 systemd-logind[787]: New session 11 of user zuul.
Jan 22 08:26:03 np0005592159 systemd[1]: Started Session 11 of User zuul.
Jan 22 08:26:04 np0005592159 python3.9[45147]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:06 np0005592159 python3.9[45303]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:26:07 np0005592159 python3.9[45456]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:26:08 np0005592159 python3.9[45614]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:26:09 np0005592159 python3.9[45774]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:26:10 np0005592159 python3.9[45858]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:26:14 np0005592159 python3.9[46022]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:26 np0005592159 kernel: SELinux:  Converting 2736 SID table entries...
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:26 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:26 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 22 08:26:26 np0005592159 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 22 08:26:28 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:28 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:28 np0005592159 systemd[1]: Reloading.
Jan 22 08:26:28 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:28 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:28 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:29 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:29 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:29 np0005592159 systemd[1]: run-r663dc6f62e7b4476a1bec8fc650f28b6.service: Deactivated successfully.
Jan 22 08:26:33 np0005592159 python3.9[47121]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:26:33 np0005592159 systemd[1]: Reloading.
Jan 22 08:26:33 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:33 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:34 np0005592159 systemd[1]: Starting Open vSwitch Database Unit...
Jan 22 08:26:34 np0005592159 chown[47162]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 22 08:26:34 np0005592159 ovs-ctl[47167]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 22 08:26:34 np0005592159 ovs-ctl[47167]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 22 08:26:34 np0005592159 ovs-ctl[47167]: Starting ovsdb-server [  OK  ]
Jan 22 08:26:34 np0005592159 ovs-vsctl[47216]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 22 08:26:34 np0005592159 ovs-vsctl[47232]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c4fa18b6-ed0f-47ac-8eec-d1399749aa8e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 22 08:26:34 np0005592159 ovs-ctl[47167]: Configuring Open vSwitch system IDs [  OK  ]
Jan 22 08:26:34 np0005592159 ovs-ctl[47167]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:34 np0005592159 systemd[1]: Started Open vSwitch Database Unit.
Jan 22 08:26:34 np0005592159 ovs-vsctl[47241]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 22 08:26:34 np0005592159 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 22 08:26:34 np0005592159 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 22 08:26:34 np0005592159 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 22 08:26:34 np0005592159 kernel: openvswitch: Open vSwitch switching datapath
Jan 22 08:26:34 np0005592159 ovs-ctl[47285]: Inserting openvswitch module [  OK  ]
Jan 22 08:26:34 np0005592159 ovs-ctl[47254]: Starting ovs-vswitchd [  OK  ]
Jan 22 08:26:34 np0005592159 ovs-vsctl[47303]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Jan 22 08:26:34 np0005592159 ovs-ctl[47254]: Enabling remote OVSDB managers [  OK  ]
Jan 22 08:26:34 np0005592159 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 22 08:26:34 np0005592159 systemd[1]: Starting Open vSwitch...
Jan 22 08:26:34 np0005592159 systemd[1]: Finished Open vSwitch.
Jan 22 08:26:36 np0005592159 python3.9[47454]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:38 np0005592159 python3.9[47606]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:26:39 np0005592159 kernel: SELinux:  Converting 2750 SID table entries...
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:26:39 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:26:40 np0005592159 python3.9[47762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:26:41 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 22 08:26:41 np0005592159 python3.9[47920]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:44 np0005592159 python3.9[48073]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:26:46 np0005592159 python3.9[48360]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:26:46 np0005592159 python3.9[48510]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:26:47 np0005592159 python3.9[48664]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:26:50 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:26:50 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:26:50 np0005592159 systemd[1]: Reloading.
Jan 22 08:26:50 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:26:50 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:26:50 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:26:51 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:26:51 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:26:51 np0005592159 systemd[1]: run-rf9eb6405d7ff4db9af28804d8ddafea6.service: Deactivated successfully.
Jan 22 08:26:52 np0005592159 python3.9[48982]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:26:52 np0005592159 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 22 08:26:52 np0005592159 systemd[1]: Stopped Network Manager Wait Online.
Jan 22 08:26:52 np0005592159 systemd[1]: Stopping Network Manager Wait Online...
Jan 22 08:26:52 np0005592159 systemd[1]: Stopping Network Manager...
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.6801] caught SIGTERM, shutting down normally.
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.6826] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.6827] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.6827] dhcp4 (eth0): state changed no lease
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.6834] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:52 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:26:52 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:26:52 np0005592159 NetworkManager[7199]: <info>  [1769088412.8911] exiting (success)
Jan 22 08:26:52 np0005592159 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 22 08:26:52 np0005592159 systemd[1]: Stopped Network Manager.
Jan 22 08:26:52 np0005592159 systemd[1]: NetworkManager.service: Consumed 12.789s CPU time, 4.1M memory peak, read 0B from disk, written 41.5K to disk.
Jan 22 08:26:52 np0005592159 systemd[1]: Starting Network Manager...
Jan 22 08:26:52 np0005592159 NetworkManager[49000]: <info>  [1769088412.9635] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:24f4eb82-7451-47a9-a2ab-85f318c16b8a)
Jan 22 08:26:52 np0005592159 NetworkManager[49000]: <info>  [1769088412.9636] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 22 08:26:52 np0005592159 NetworkManager[49000]: <info>  [1769088412.9697] manager[0x55e326179000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 22 08:26:52 np0005592159 systemd[1]: Starting Hostname Service...
Jan 22 08:26:53 np0005592159 systemd[1]: Started Hostname Service.
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0600] hostname: hostname: using hostnamed
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0600] hostname: static hostname changed from (none) to "compute-2"
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0606] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0611] manager[0x55e326179000]: rfkill: Wi-Fi hardware radio set enabled
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0612] manager[0x55e326179000]: rfkill: WWAN hardware radio set enabled
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0633] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0642] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0643] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0644] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0645] manager: Networking is enabled by state file
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0647] settings: Loaded settings plugin: keyfile (internal)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0651] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0681] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0694] dhcp: init: Using DHCP client 'internal'
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0696] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0700] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0705] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0716] device (lo): Activation: starting connection 'lo' (4169075c-72f8-4434-940a-1a390ca696d3)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0722] device (eth0): carrier: link connected
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0726] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0732] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0734] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0741] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0746] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0752] device (eth1): carrier: link connected
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0756] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0762] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba) (indicated)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0762] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0769] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0777] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0784] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 22 08:26:53 np0005592159 systemd[1]: Started Network Manager.
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0794] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0797] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0799] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0803] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0806] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0809] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0813] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0818] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0826] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0830] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0839] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0851] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0861] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0866] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0872] device (lo): Activation: successful, device activated.
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0879] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0882] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0885] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0888] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0890] device (eth1): Activation: successful, device activated.
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.0901] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 22 08:26:53 np0005592159 systemd[1]: Starting Network Manager Wait Online...
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.1932] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2004] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2005] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2008] manager: NetworkManager state is now CONNECTED_SITE
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2010] device (eth0): Activation: successful, device activated.
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2016] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 22 08:26:53 np0005592159 NetworkManager[49000]: <info>  [1769088413.2356] manager: startup complete
Jan 22 08:26:53 np0005592159 systemd[1]: Finished Network Manager Wait Online.
Jan 22 08:26:54 np0005592159 python3.9[49208]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:03 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:06 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:27:06 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:27:06 np0005592159 systemd[1]: Reloading.
Jan 22 08:27:06 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:27:06 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:27:06 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:27:07 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:27:07 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:27:07 np0005592159 systemd[1]: run-r22677baaabb740128278b5f46fbd6980.service: Deactivated successfully.
Jan 22 08:27:08 np0005592159 python3.9[49668]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:27:09 np0005592159 python3.9[49820]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:10 np0005592159 python3.9[49974]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:11 np0005592159 python3.9[50126]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592159 python3.9[50278]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:12 np0005592159 python3.9[50430]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:13 np0005592159 python3.9[50582]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:14 np0005592159 python3.9[50705]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088433.2086165-649-175126026326719/.source _original_basename=.9x2j16ri follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:15 np0005592159 python3.9[50857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:16 np0005592159 python3.9[51009]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 22 08:27:17 np0005592159 python3.9[51161]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:20 np0005592159 python3.9[51588]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 22 08:27:21 np0005592159 ansible-async_wrapper.py[51763]: Invoked with j398345004378 300 /home/zuul/.ansible/tmp/ansible-tmp-1769088440.5646186-847-154640478254585/AnsiballZ_edpm_os_net_config.py _
Jan 22 08:27:21 np0005592159 ansible-async_wrapper.py[51766]: Starting module and watcher
Jan 22 08:27:21 np0005592159 ansible-async_wrapper.py[51766]: Start watching 51767 (300)
Jan 22 08:27:21 np0005592159 ansible-async_wrapper.py[51767]: Start module (51767)
Jan 22 08:27:21 np0005592159 ansible-async_wrapper.py[51763]: Return async_wrapper task started.
Jan 22 08:27:22 np0005592159 python3.9[51768]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 22 08:27:22 np0005592159 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 22 08:27:22 np0005592159 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 22 08:27:22 np0005592159 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 22 08:27:22 np0005592159 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 22 08:27:22 np0005592159 kernel: cfg80211: failed to load regulatory.db
Jan 22 08:27:23 np0005592159 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9308] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9331] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9908] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9910] audit: op="connection-add" uuid="794ece31-c950-47e0-b112-d35532234c80" name="br-ex-br" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9927] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9929] audit: op="connection-add" uuid="cfa747da-58e6-4689-922d-9de70c75d190" name="br-ex-port" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9944] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9945] audit: op="connection-add" uuid="01d1f839-e308-47fa-9552-b2bf782de783" name="eth1-port" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9959] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9961] audit: op="connection-add" uuid="653c7484-2f46-4a61-bebe-aeb46aee2b4d" name="vlan20-port" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9978] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9980] audit: op="connection-add" uuid="7c6775d8-492e-4c9b-b693-de0f747bcd4b" name="vlan21-port" pid=51769 uid=0 result="success"
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9992] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 22 08:27:23 np0005592159 NetworkManager[49000]: <info>  [1769088443.9994] audit: op="connection-add" uuid="d4d9d3b7-2ffe-45f3-93ea-99b12d620658" name="vlan22-port" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.0007] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.0009] audit: op="connection-add" uuid="510a7eb7-fa56-416d-80a8-585e183c87cb" name="vlan23-port" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.0030] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.0046] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.0048] audit: op="connection-add" uuid="cc2bdf83-bde6-4891-9ac7-1a16d6d2c96a" name="br-ex-if" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1829] audit: op="connection-update" uuid="dcaea49a-a5c5-5229-9667-55a0529b8fba" name="ci-private-network" args="ipv6.routes,ipv6.routing-rules,ipv6.addresses,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ovs-interface.type,connection.timestamp,connection.master,connection.slave-type,connection.controller,connection.port-type,ipv4.never-default,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.dns,ipv4.method,ovs-external-ids.data" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1864] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1867] audit: op="connection-add" uuid="874673f3-da52-46f1-a439-0fc3d630c8a5" name="vlan20-if" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1897] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1899] audit: op="connection-add" uuid="20424f20-d962-437e-b725-715685dd4a3c" name="vlan21-if" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1928] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1931] audit: op="connection-add" uuid="74049661-d3e8-4640-8857-4d3b9096f66b" name="vlan22-if" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1962] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1965] audit: op="connection-add" uuid="03984fbf-a87a-4009-9d20-112f7b9dc3f6" name="vlan23-if" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.1986] audit: op="connection-delete" uuid="128e382a-734b-354e-b29c-4c5a72c08cb7" name="Wired connection 1" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2007] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2011] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2025] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2032] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (794ece31-c950-47e0-b112-d35532234c80)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2033] audit: op="connection-activate" uuid="794ece31-c950-47e0-b112-d35532234c80" name="br-ex-br" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2036] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2037] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2047] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2054] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (cfa747da-58e6-4689-922d-9de70c75d190)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2057] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2058] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2067] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2075] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (01d1f839-e308-47fa-9552-b2bf782de783)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2078] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2079] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2087] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2095] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (653c7484-2f46-4a61-bebe-aeb46aee2b4d)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2098] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2099] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2109] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2116] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (7c6775d8-492e-4c9b-b693-de0f747bcd4b)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2118] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2121] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2130] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2139] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (d4d9d3b7-2ffe-45f3-93ea-99b12d620658)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2142] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2145] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2155] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2164] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (510a7eb7-fa56-416d-80a8-585e183c87cb)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2166] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2170] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2173] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2184] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2185] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2189] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2196] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cc2bdf83-bde6-4891-9ac7-1a16d6d2c96a)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2197] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2204] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2208] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2210] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2213] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2231] device (eth1): disconnecting for new activation request.
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2232] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2238] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2242] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2244] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2248] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2250] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2255] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2262] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (874673f3-da52-46f1-a439-0fc3d630c8a5)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2263] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2268] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2271] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2273] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2278] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2280] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2284] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2291] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (20424f20-d962-437e-b725-715685dd4a3c)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2293] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2298] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2301] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2303] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2307] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2309] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2314] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2320] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (74049661-d3e8-4640-8857-4d3b9096f66b)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2322] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2327] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2330] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2332] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2337] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <warn>  [1769088444.2338] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2341] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2346] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (03984fbf-a87a-4009-9d20-112f7b9dc3f6)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2347] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2349] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2351] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2352] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2354] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2370] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2372] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2377] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2379] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2387] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2391] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2394] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2397] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2398] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2403] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2407] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2410] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2411] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2416] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2420] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2423] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2424] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2429] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2433] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2436] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2437] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2441] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): canceled DHCP transaction
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2446] dhcp4 (eth0): state changed no lease
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2448] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2467] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51769 uid=0 result="fail" reason="Device is not activated"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2529] device (eth1): disconnecting for new activation request.
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2530] audit: op="connection-activate" uuid="dcaea49a-a5c5-5229-9667-55a0529b8fba" name="ci-private-network" pid=51769 uid=0 result="success"
Jan 22 08:27:24 np0005592159 NetworkManager[49000]: <info>  [1769088444.2618] dhcp4 (eth0): state changed new lease, address=38.102.83.5
Jan 22 08:27:24 np0005592159 kernel: ovs-system: entered promiscuous mode
Jan 22 08:27:24 np0005592159 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 22 08:27:24 np0005592159 kernel: Timeout policy base is empty
Jan 22 08:27:24 np0005592159 systemd-udevd[51775]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:24 np0005592159 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 22 08:27:24 np0005592159 kernel: br-ex: entered promiscuous mode
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1775] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1797] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1808] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1813] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1814] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 22 08:27:25 np0005592159 kernel: vlan20: entered promiscuous mode
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1953] device (eth1): Activation: starting connection 'ci-private-network' (dcaea49a-a5c5-5229-9667-55a0529b8fba)
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1960] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1963] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1966] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1968] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1971] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1973] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1975] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1980] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.1995] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2002] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 kernel: vlan21: entered promiscuous mode
Jan 22 08:27:25 np0005592159 systemd-udevd[51773]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2024] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2034] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2048] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2059] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2067] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2075] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2081] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2096] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2099] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2102] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2105] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2107] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2110] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2113] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2122] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 kernel: vlan22: entered promiscuous mode
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2134] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2142] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2144] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2146] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2160] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 22 08:27:25 np0005592159 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2429] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2430] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2454] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2465] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2471] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2484] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 kernel: vlan23: entered promiscuous mode
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.2520] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4472] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4473] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4476] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4478] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4484] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4491] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4501] device (eth1): Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4508] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4516] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4526] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4540] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4586] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.4598] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 python3.9[52094]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7518] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7525] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7533] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7544] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7555] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 22 08:27:25 np0005592159 NetworkManager[49000]: <info>  [1769088445.7564] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 22 08:27:26 np0005592159 ansible-async_wrapper.py[51766]: 51767 still running (300)
Jan 22 08:27:26 np0005592159 NetworkManager[49000]: <info>  [1769088446.9385] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.0795] checkpoint[0x55e32614f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.0797] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.4760] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.4774] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.8138] audit: op="networking-control" arg="global-dns-configuration" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.8215] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.8518] audit: op="networking-control" arg="global-dns-configuration" pid=51769 uid=0 result="success"
Jan 22 08:27:27 np0005592159 NetworkManager[49000]: <info>  [1769088447.8539] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 08:27:28 np0005592159 NetworkManager[49000]: <info>  [1769088448.0119] checkpoint[0x55e32614fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 22 08:27:28 np0005592159 NetworkManager[49000]: <info>  [1769088448.0122] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51769 uid=0 result="success"
Jan 22 08:27:28 np0005592159 ansible-async_wrapper.py[51767]: Module complete (51767)
Jan 22 08:27:29 np0005592159 python3.9[52234]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=status _async_dir=/root/.ansible_async
Jan 22 08:27:29 np0005592159 python3.9[52334]: ansible-ansible.legacy.async_status Invoked with jid=j398345004378.51763 mode=cleanup _async_dir=/root/.ansible_async
Jan 22 08:27:30 np0005592159 python3.9[52486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:31 np0005592159 python3.9[52609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088450.1840255-928-103982853368098/.source.returncode _original_basename=.u3cimw6s follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:31 np0005592159 ansible-async_wrapper.py[51766]: Done in kid B.
Jan 22 08:27:32 np0005592159 python3.9[52761]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:27:33 np0005592159 python3.9[52885]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088451.8767743-976-127544732575938/.source.cfg _original_basename=.nin0l4pl follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:27:33 np0005592159 python3.9[53037]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:27:33 np0005592159 systemd[1]: Reloading Network Manager...
Jan 22 08:27:33 np0005592159 NetworkManager[49000]: <info>  [1769088453.9419] audit: op="reload" arg="0" pid=53041 uid=0 result="success"
Jan 22 08:27:33 np0005592159 NetworkManager[49000]: <info>  [1769088453.9425] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 22 08:27:34 np0005592159 systemd[1]: Reloaded Network Manager.
Jan 22 08:27:35 np0005592159 systemd-logind[787]: Session 11 logged out. Waiting for processes to exit.
Jan 22 08:27:35 np0005592159 systemd[1]: session-11.scope: Deactivated successfully.
Jan 22 08:27:35 np0005592159 systemd[1]: session-11.scope: Consumed 50.847s CPU time.
Jan 22 08:27:35 np0005592159 systemd-logind[787]: Removed session 11.
Jan 22 08:27:40 np0005592159 systemd-logind[787]: New session 12 of user zuul.
Jan 22 08:27:40 np0005592159 systemd[1]: Started Session 12 of User zuul.
Jan 22 08:27:41 np0005592159 python3.9[53227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:42 np0005592159 python3.9[53382]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:44 np0005592159 python3.9[53575]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:27:44 np0005592159 systemd[1]: session-12.scope: Deactivated successfully.
Jan 22 08:27:44 np0005592159 systemd[1]: session-12.scope: Consumed 2.369s CPU time.
Jan 22 08:27:44 np0005592159 systemd-logind[787]: Session 12 logged out. Waiting for processes to exit.
Jan 22 08:27:44 np0005592159 systemd-logind[787]: Removed session 12.
Jan 22 08:27:44 np0005592159 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 22 08:27:50 np0005592159 systemd-logind[787]: New session 13 of user zuul.
Jan 22 08:27:50 np0005592159 systemd[1]: Started Session 13 of User zuul.
Jan 22 08:27:51 np0005592159 python3.9[53759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:52 np0005592159 python3.9[53914]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:27:53 np0005592159 python3.9[54070]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:54 np0005592159 python3.9[54154]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:27:57 np0005592159 python3.9[54308]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:27:58 np0005592159 python3.9[54503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:00 np0005592159 python3.9[54655]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:00 np0005592159 systemd[1]: var-lib-containers-storage-overlay-compat3623228423-merged.mount: Deactivated successfully.
Jan 22 08:28:00 np0005592159 podman[54656]: 2026-01-22 13:28:00.666754998 +0000 UTC m=+0.477533541 system refresh
Jan 22 08:28:01 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:28:01 np0005592159 python3.9[54818]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:02 np0005592159 python3.9[54941]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088480.9226375-198-133785573338893/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0c46a80e07b38ef47d30b351f23b4c464d4715e8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:03 np0005592159 python3.9[55093]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:03 np0005592159 python3.9[55216]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088482.6841474-244-90054794372535/.source.conf follow=False _original_basename=registries.conf.j2 checksum=5a3e69bacb50e2daad69ea0ffc6501536059b061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:04 np0005592159 python3.9[55368]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:05 np0005592159 python3.9[55520]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592159 python3.9[55672]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:06 np0005592159 python3.9[55824]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:07 np0005592159 python3.9[55976]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:10 np0005592159 python3.9[56129]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:11 np0005592159 python3.9[56283]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:11 np0005592159 python3.9[56435]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:28:12 np0005592159 python3.9[56587]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:28:13 np0005592159 python3.9[56740]: ansible-service_facts Invoked
Jan 22 08:28:13 np0005592159 network[56757]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:28:13 np0005592159 network[56758]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:28:13 np0005592159 network[56759]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:28:20 np0005592159 python3.9[57211]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:28:23 np0005592159 python3.9[57365]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:28:25 np0005592159 python3.9[57518]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:26 np0005592159 python3.9[57643]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088504.9399352-677-215326738337300/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:26 np0005592159 python3.9[57797]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:27 np0005592159 python3.9[57922]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088506.4549255-721-243239578315662/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:29 np0005592159 python3.9[58076]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:31 np0005592159 python3.9[58230]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:32 np0005592159 python3.9[58314]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:28:34 np0005592159 python3.9[58468]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:28:34 np0005592159 python3.9[58552]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:28:34 np0005592159 chronyd[798]: chronyd exiting
Jan 22 08:28:34 np0005592159 systemd[1]: Stopping NTP client/server...
Jan 22 08:28:34 np0005592159 systemd[1]: chronyd.service: Deactivated successfully.
Jan 22 08:28:34 np0005592159 systemd[1]: Stopped NTP client/server.
Jan 22 08:28:34 np0005592159 systemd[1]: Starting NTP client/server...
Jan 22 08:28:34 np0005592159 chronyd[58561]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 22 08:28:34 np0005592159 chronyd[58561]: Frequency -26.130 +/- 0.081 ppm read from /var/lib/chrony/drift
Jan 22 08:28:34 np0005592159 chronyd[58561]: Loaded seccomp filter (level 2)
Jan 22 08:28:34 np0005592159 systemd[1]: Started NTP client/server.
Jan 22 08:28:35 np0005592159 systemd[1]: session-13.scope: Deactivated successfully.
Jan 22 08:28:35 np0005592159 systemd[1]: session-13.scope: Consumed 26.825s CPU time.
Jan 22 08:28:35 np0005592159 systemd-logind[787]: Session 13 logged out. Waiting for processes to exit.
Jan 22 08:28:35 np0005592159 systemd-logind[787]: Removed session 13.
Jan 22 08:28:41 np0005592159 systemd-logind[787]: New session 14 of user zuul.
Jan 22 08:28:41 np0005592159 systemd[1]: Started Session 14 of User zuul.
Jan 22 08:28:43 np0005592159 python3.9[58742]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:44 np0005592159 python3.9[58894]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:45 np0005592159 python3.9[59017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088523.6926246-64-62528235904327/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:45 np0005592159 systemd[1]: session-14.scope: Deactivated successfully.
Jan 22 08:28:45 np0005592159 systemd[1]: session-14.scope: Consumed 1.606s CPU time.
Jan 22 08:28:45 np0005592159 systemd-logind[787]: Session 14 logged out. Waiting for processes to exit.
Jan 22 08:28:45 np0005592159 systemd-logind[787]: Removed session 14.
Jan 22 08:28:51 np0005592159 systemd-logind[787]: New session 15 of user zuul.
Jan 22 08:28:51 np0005592159 systemd[1]: Started Session 15 of User zuul.
Jan 22 08:28:52 np0005592159 python3.9[59195]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:28:53 np0005592159 python3.9[59351]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:55 np0005592159 python3.9[59526]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:55 np0005592159 python3.9[59649]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1769088534.295648-85-44645175631/.source.json _original_basename=.20cdxh6o follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:56 np0005592159 python3.9[59801]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:28:57 np0005592159 python3.9[59924]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088536.4608045-155-178240899965189/.source _original_basename=.y0j3uq0e follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:28:58 np0005592159 python3.9[60076]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:28:59 np0005592159 python3.9[60228]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:00 np0005592159 python3.9[60351]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088539.0509212-227-28165186001298/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:00 np0005592159 python3.9[60503]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:01 np0005592159 python3.9[60626]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769088540.378298-227-224958880369007/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:29:02 np0005592159 python3.9[60780]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:03 np0005592159 python3.9[60932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:05 np0005592159 python3.9[61057]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088542.9345155-338-206018036371495/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:06 np0005592159 python3.9[61209]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:06 np0005592159 python3.9[61332]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088545.6803098-383-60318238510431/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:07 np0005592159 python3.9[61485]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:07 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:08 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:08 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:08 np0005592159 systemd[1]: Starting EDPM Container Shutdown...
Jan 22 08:29:08 np0005592159 systemd[1]: Finished EDPM Container Shutdown.
Jan 22 08:29:09 np0005592159 python3.9[61713]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:09 np0005592159 python3.9[61836]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088548.9560397-452-46764105561815/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:10 np0005592159 python3.9[61988]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:11 np0005592159 python3.9[62111]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088550.2500665-496-31084238662092/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:12 np0005592159 python3.9[62263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:12 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:12 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:12 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:12 np0005592159 systemd[1]: Starting Create netns directory...
Jan 22 08:29:12 np0005592159 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:29:12 np0005592159 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:29:12 np0005592159 systemd[1]: Finished Create netns directory.
Jan 22 08:29:13 np0005592159 python3.9[62491]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:29:13 np0005592159 network[62508]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:29:13 np0005592159 network[62509]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:29:13 np0005592159 network[62510]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:29:20 np0005592159 python3.9[62773]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:20 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:20 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:20 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:20 np0005592159 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 22 08:29:20 np0005592159 iptables.init[62813]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 22 08:29:21 np0005592159 iptables.init[62813]: iptables: Flushing firewall rules: [  OK  ]
Jan 22 08:29:21 np0005592159 systemd[1]: iptables.service: Deactivated successfully.
Jan 22 08:29:21 np0005592159 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 22 08:29:21 np0005592159 python3.9[63009]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:22 np0005592159 python3.9[63163]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:29:23 np0005592159 systemd[1]: Reloading.
Jan 22 08:29:23 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:29:23 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:29:23 np0005592159 systemd[1]: Starting Netfilter Tables...
Jan 22 08:29:23 np0005592159 systemd[1]: Finished Netfilter Tables.
Jan 22 08:29:29 np0005592159 python3.9[63356]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:30 np0005592159 python3.9[63509]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:31 np0005592159 python3.9[63634]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088570.1239164-703-250404812380356/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:32 np0005592159 python3.9[63787]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:29:32 np0005592159 systemd[1]: Reloading OpenSSH server daemon...
Jan 22 08:29:32 np0005592159 systemd[1]: Reloaded OpenSSH server daemon.
Jan 22 08:29:33 np0005592159 python3.9[63943]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:34 np0005592159 python3.9[64095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:34 np0005592159 python3.9[64218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088573.713581-797-115470744774321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:36 np0005592159 python3.9[64370]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:29:36 np0005592159 systemd[1]: Starting Time & Date Service...
Jan 22 08:29:36 np0005592159 systemd[1]: Started Time & Date Service.
Jan 22 08:29:37 np0005592159 python3.9[64526]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:37 np0005592159 python3.9[64678]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:38 np0005592159 python3.9[64801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088577.2856202-902-208474160571953/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:39 np0005592159 python3.9[64953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:39 np0005592159 python3.9[65076]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769088578.6656504-947-253414018419296/.source.yaml _original_basename=.05vf8810 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:40 np0005592159 python3.9[65228]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:40 np0005592159 python3.9[65351]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088579.9630961-992-58148605538754/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:41 np0005592159 python3.9[65503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:42 np0005592159 python3.9[65656]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:43 np0005592159 python3[65809]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:29:44 np0005592159 python3.9[65961]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:44 np0005592159 python3.9[66084]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088583.799714-1109-168153337736363/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:45 np0005592159 python3.9[66236]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:46 np0005592159 python3.9[66359]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088585.1887827-1154-177474045876685/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:47 np0005592159 python3.9[66511]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:47 np0005592159 python3.9[66634]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088586.7367969-1199-258533528063595/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:48 np0005592159 python3.9[66786]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:49 np0005592159 python3.9[66909]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088588.020991-1244-200085709639151/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:50 np0005592159 python3.9[67061]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:29:50 np0005592159 python3.9[67184]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769088589.3809493-1289-119630266069739/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:51 np0005592159 python3.9[67336]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:52 np0005592159 python3.9[67488]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:29:53 np0005592159 python3.9[67647]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:54 np0005592159 python3.9[67800]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:54 np0005592159 python3.9[67952]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:29:55 np0005592159 python3.9[68104]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:29:56 np0005592159 python3.9[68257]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:29:57 np0005592159 systemd[1]: session-15.scope: Deactivated successfully.
Jan 22 08:29:57 np0005592159 systemd[1]: session-15.scope: Consumed 37.675s CPU time.
Jan 22 08:29:57 np0005592159 systemd-logind[787]: Session 15 logged out. Waiting for processes to exit.
Jan 22 08:29:57 np0005592159 systemd-logind[787]: Removed session 15.
Jan 22 08:30:03 np0005592159 systemd-logind[787]: New session 16 of user zuul.
Jan 22 08:30:03 np0005592159 systemd[1]: Started Session 16 of User zuul.
Jan 22 08:30:04 np0005592159 python3.9[68438]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:30:06 np0005592159 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:30:06 np0005592159 python3.9[68593]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:08 np0005592159 python3.9[68745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:09 np0005592159 python3.9[68897]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012 create=True mode=0644 path=/tmp/ansible.3fn2oeoe state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:10 np0005592159 python3.9[69049]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.3fn2oeoe' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:10 np0005592159 python3.9[69203]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.3fn2oeoe state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:11 np0005592159 systemd[1]: session-16.scope: Deactivated successfully.
Jan 22 08:30:11 np0005592159 systemd[1]: session-16.scope: Consumed 3.552s CPU time.
Jan 22 08:30:11 np0005592159 systemd-logind[787]: Session 16 logged out. Waiting for processes to exit.
Jan 22 08:30:11 np0005592159 systemd-logind[787]: Removed session 16.
Jan 22 08:30:17 np0005592159 systemd-logind[787]: New session 17 of user zuul.
Jan 22 08:30:17 np0005592159 systemd[1]: Started Session 17 of User zuul.
Jan 22 08:30:18 np0005592159 python3.9[69381]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:20 np0005592159 python3.9[69537]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:30:20 np0005592159 python3.9[69691]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:30:21 np0005592159 python3.9[69844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:22 np0005592159 python3.9[69997]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:23 np0005592159 python3.9[70151]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:24 np0005592159 python3.9[70306]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:30:24 np0005592159 systemd[1]: session-17.scope: Deactivated successfully.
Jan 22 08:30:24 np0005592159 systemd[1]: session-17.scope: Consumed 4.176s CPU time.
Jan 22 08:30:24 np0005592159 systemd-logind[787]: Session 17 logged out. Waiting for processes to exit.
Jan 22 08:30:24 np0005592159 systemd-logind[787]: Removed session 17.
Jan 22 08:30:30 np0005592159 systemd-logind[787]: New session 18 of user zuul.
Jan 22 08:30:30 np0005592159 systemd[1]: Started Session 18 of User zuul.
Jan 22 08:30:31 np0005592159 python3.9[70484]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:32 np0005592159 python3.9[70640]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:30:33 np0005592159 python3.9[70724]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:30:36 np0005592159 python3.9[70875]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:37 np0005592159 python3.9[71026]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:30:38 np0005592159 python3.9[71176]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:38 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:30:39 np0005592159 python3.9[71327]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:30:40 np0005592159 systemd[1]: session-18.scope: Deactivated successfully.
Jan 22 08:30:40 np0005592159 systemd[1]: session-18.scope: Consumed 6.169s CPU time.
Jan 22 08:30:40 np0005592159 systemd-logind[787]: Session 18 logged out. Waiting for processes to exit.
Jan 22 08:30:40 np0005592159 systemd-logind[787]: Removed session 18.
Jan 22 08:30:44 np0005592159 chronyd[58561]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 22 08:30:48 np0005592159 systemd-logind[787]: New session 19 of user zuul.
Jan 22 08:30:48 np0005592159 systemd[1]: Started Session 19 of User zuul.
Jan 22 08:30:55 np0005592159 python3[72093]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:30:57 np0005592159 python3[72188]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 22 08:30:59 np0005592159 python3[72215]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 22 08:30:59 np0005592159 python3[72241]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=7G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:30:59 np0005592159 kernel: loop: module loaded
Jan 22 08:30:59 np0005592159 kernel: loop3: detected capacity change from 0 to 14680064
Jan 22 08:31:00 np0005592159 python3[72275]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:31:00 np0005592159 lvm[72278]: PV /dev/loop3 not used.
Jan 22 08:31:00 np0005592159 lvm[72280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592159 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 22 08:31:00 np0005592159 lvm[72290]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:00 np0005592159 lvm[72290]: VG ceph_vg0 finished
Jan 22 08:31:00 np0005592159 lvm[72287]:  1 logical volume(s) in volume group "ceph_vg0" now active
Jan 22 08:31:00 np0005592159 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 22 08:31:00 np0005592159 python3[72368]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 22 08:31:01 np0005592159 python3[72441]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769088660.6088269-37031-193881875744097/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:31:02 np0005592159 python3[72491]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:31:02 np0005592159 systemd[1]: Reloading.
Jan 22 08:31:02 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:31:02 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:31:02 np0005592159 systemd[1]: Starting Ceph OSD losetup...
Jan 22 08:31:02 np0005592159 bash[72531]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 22 08:31:02 np0005592159 systemd[1]: Finished Ceph OSD losetup.
Jan 22 08:31:02 np0005592159 lvm[72532]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:31:02 np0005592159 lvm[72532]: VG ceph_vg0 finished
Jan 22 08:31:04 np0005592159 python3[72556]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:33:33 np0005592159 systemd-logind[787]: New session 20 of user ceph-admin.
Jan 22 08:33:33 np0005592159 systemd[1]: Created slice User Slice of UID 42477.
Jan 22 08:33:33 np0005592159 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 22 08:33:33 np0005592159 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 22 08:33:33 np0005592159 systemd[1]: Starting User Manager for UID 42477...
Jan 22 08:33:33 np0005592159 systemd[72610]: Queued start job for default target Main User Target.
Jan 22 08:33:33 np0005592159 systemd[72610]: Created slice User Application Slice.
Jan 22 08:33:33 np0005592159 systemd[72610]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 22 08:33:33 np0005592159 systemd[72610]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:33:33 np0005592159 systemd[72610]: Reached target Paths.
Jan 22 08:33:33 np0005592159 systemd[72610]: Reached target Timers.
Jan 22 08:33:33 np0005592159 systemd[72610]: Starting D-Bus User Message Bus Socket...
Jan 22 08:33:33 np0005592159 systemd[72610]: Starting Create User's Volatile Files and Directories...
Jan 22 08:33:33 np0005592159 systemd[72610]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:33:33 np0005592159 systemd[72610]: Finished Create User's Volatile Files and Directories.
Jan 22 08:33:33 np0005592159 systemd[72610]: Reached target Sockets.
Jan 22 08:33:33 np0005592159 systemd[72610]: Reached target Basic System.
Jan 22 08:33:33 np0005592159 systemd[72610]: Reached target Main User Target.
Jan 22 08:33:33 np0005592159 systemd[72610]: Startup finished in 111ms.
Jan 22 08:33:33 np0005592159 systemd[1]: Started User Manager for UID 42477.
Jan 22 08:33:33 np0005592159 systemd[1]: Started Session 20 of User ceph-admin.
Jan 22 08:33:33 np0005592159 systemd-logind[787]: New session 22 of user ceph-admin.
Jan 22 08:33:33 np0005592159 systemd[1]: Started Session 22 of User ceph-admin.
Jan 22 08:33:33 np0005592159 systemd-logind[787]: New session 23 of user ceph-admin.
Jan 22 08:33:33 np0005592159 systemd[1]: Started Session 23 of User ceph-admin.
Jan 22 08:33:34 np0005592159 systemd-logind[787]: New session 24 of user ceph-admin.
Jan 22 08:33:34 np0005592159 systemd[1]: Started Session 24 of User ceph-admin.
Jan 22 08:33:34 np0005592159 systemd-logind[787]: New session 25 of user ceph-admin.
Jan 22 08:33:34 np0005592159 systemd[1]: Started Session 25 of User ceph-admin.
Jan 22 08:33:34 np0005592159 systemd-logind[787]: New session 26 of user ceph-admin.
Jan 22 08:33:34 np0005592159 systemd[1]: Started Session 26 of User ceph-admin.
Jan 22 08:33:35 np0005592159 systemd-logind[787]: New session 27 of user ceph-admin.
Jan 22 08:33:35 np0005592159 systemd[1]: Started Session 27 of User ceph-admin.
Jan 22 08:33:35 np0005592159 systemd-logind[787]: New session 28 of user ceph-admin.
Jan 22 08:33:35 np0005592159 systemd[1]: Started Session 28 of User ceph-admin.
Jan 22 08:33:36 np0005592159 systemd-logind[787]: New session 29 of user ceph-admin.
Jan 22 08:33:36 np0005592159 systemd[1]: Started Session 29 of User ceph-admin.
Jan 22 08:33:36 np0005592159 systemd-logind[787]: New session 30 of user ceph-admin.
Jan 22 08:33:36 np0005592159 systemd[1]: Started Session 30 of User ceph-admin.
Jan 22 08:33:37 np0005592159 systemd-logind[787]: New session 31 of user ceph-admin.
Jan 22 08:33:37 np0005592159 systemd[1]: Started Session 31 of User ceph-admin.
Jan 22 08:33:37 np0005592159 systemd-logind[787]: New session 32 of user ceph-admin.
Jan 22 08:33:37 np0005592159 systemd[1]: Started Session 32 of User ceph-admin.
Jan 22 08:33:37 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:36 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:37 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:37 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:37 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:37 np0005592159 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 73633 (sysctl)
Jan 22 08:34:38 np0005592159 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 22 08:34:38 np0005592159 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 22 08:34:38 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:39 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:34:43 np0005592159 systemd[1]: var-lib-containers-storage-overlay-compat1386398941-lower\x2dmapped.mount: Deactivated successfully.
Jan 22 08:35:16 np0005592159 podman[73910]: 2026-01-22 13:35:16.419786263 +0000 UTC m=+36.911080043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:16 np0005592159 podman[73910]: 2026-01-22 13:35:16.802841286 +0000 UTC m=+37.294135046 container create 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:35:17 np0005592159 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 22 08:35:17 np0005592159 systemd[1]: Started libpod-conmon-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope.
Jan 22 08:35:17 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:17 np0005592159 podman[73910]: 2026-01-22 13:35:17.775524378 +0000 UTC m=+38.266818168 container init 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 08:35:17 np0005592159 podman[73910]: 2026-01-22 13:35:17.783054284 +0000 UTC m=+38.274348084 container start 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:17 np0005592159 gallant_kalam[73980]: 167 167
Jan 22 08:35:17 np0005592159 systemd[1]: libpod-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope: Deactivated successfully.
Jan 22 08:35:17 np0005592159 podman[73910]: 2026-01-22 13:35:17.948896878 +0000 UTC m=+38.440190668 container attach 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 22 08:35:17 np0005592159 podman[73910]: 2026-01-22 13:35:17.949568745 +0000 UTC m=+38.440862525 container died 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:35:18 np0005592159 systemd[1]: var-lib-containers-storage-overlay-3f5a7ef4872511dbce92abc0bd3d0bd2f6a1fed938990b49cced862a76caf8d8-merged.mount: Deactivated successfully.
Jan 22 08:35:18 np0005592159 podman[73910]: 2026-01-22 13:35:18.710811987 +0000 UTC m=+39.202105757 container remove 858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Jan 22 08:35:18 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:18 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:18 np0005592159 systemd[1]: libpod-conmon-858ea6d0ed93fbe72b213057f3892e7430542e87e897a5929c6a45606855d1f4.scope: Deactivated successfully.
Jan 22 08:35:18 np0005592159 podman[74003]: 2026-01-22 13:35:18.856802494 +0000 UTC m=+0.042427084 container create 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:18 np0005592159 systemd[1]: Started libpod-conmon-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope.
Jan 22 08:35:18 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:18 np0005592159 podman[74003]: 2026-01-22 13:35:18.834581436 +0000 UTC m=+0.020206046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:18 np0005592159 podman[74003]: 2026-01-22 13:35:18.93505505 +0000 UTC m=+0.120679670 container init 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:35:18 np0005592159 podman[74003]: 2026-01-22 13:35:18.941403785 +0000 UTC m=+0.127028375 container start 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Jan 22 08:35:18 np0005592159 podman[74003]: 2026-01-22 13:35:18.945990344 +0000 UTC m=+0.131614964 container attach 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:20 np0005592159 charming_davinci[74019]: [
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:    {
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "available": false,
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "ceph_device": false,
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "lsm_data": {},
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "lvs": [],
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "path": "/dev/sr0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "rejected_reasons": [
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "Insufficient space (<5GB)",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "Has a FileSystem"
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        ],
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        "sys_api": {
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "actuators": null,
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "device_nodes": "sr0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "devname": "sr0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "human_readable_size": "482.00 KB",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "id_bus": "ata",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "model": "QEMU DVD-ROM",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "nr_requests": "2",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "parent": "/dev/sr0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "partitions": {},
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "path": "/dev/sr0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "removable": "1",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "rev": "2.5+",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "ro": "0",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "rotational": "1",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "sas_address": "",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "sas_device_handle": "",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "scheduler_mode": "mq-deadline",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "sectors": 0,
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "sectorsize": "2048",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "size": 493568.0,
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "support_discard": "2048",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "type": "disk",
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:            "vendor": "QEMU"
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:        }
Jan 22 08:35:20 np0005592159 charming_davinci[74019]:    }
Jan 22 08:35:20 np0005592159 charming_davinci[74019]: ]
Jan 22 08:35:20 np0005592159 systemd[1]: libpod-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Deactivated successfully.
Jan 22 08:35:20 np0005592159 systemd[1]: libpod-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Consumed 1.131s CPU time.
Jan 22 08:35:20 np0005592159 podman[74003]: 2026-01-22 13:35:20.071853791 +0000 UTC m=+1.257478391 container died 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 08:35:20 np0005592159 systemd[1]: var-lib-containers-storage-overlay-a48db2295397123c3951b3f86cc289f28156c04b273da95798f8c6f01aaf697e-merged.mount: Deactivated successfully.
Jan 22 08:35:20 np0005592159 podman[74003]: 2026-01-22 13:35:20.448441216 +0000 UTC m=+1.634065806 container remove 28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:35:20 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:20 np0005592159 systemd[1]: libpod-conmon-28e511233189213b3c83015bc80a490135800989abf548eeb96f81dc83ae899f.scope: Deactivated successfully.
Jan 22 08:35:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:25 np0005592159 podman[76726]: 2026-01-22 13:35:25.96217771 +0000 UTC m=+0.045359261 container create 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:25 np0005592159 systemd[1]: Started libpod-conmon-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope.
Jan 22 08:35:26 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:26 np0005592159 podman[76726]: 2026-01-22 13:35:26.030144878 +0000 UTC m=+0.113326469 container init 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:26 np0005592159 podman[76726]: 2026-01-22 13:35:26.038491945 +0000 UTC m=+0.121673496 container start 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:26 np0005592159 podman[76726]: 2026-01-22 13:35:25.94178901 +0000 UTC m=+0.024970581 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:26 np0005592159 great_golick[76742]: 167 167
Jan 22 08:35:26 np0005592159 podman[76726]: 2026-01-22 13:35:26.043234579 +0000 UTC m=+0.126416130 container attach 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:26 np0005592159 systemd[1]: libpod-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope: Deactivated successfully.
Jan 22 08:35:26 np0005592159 conmon[76742]: conmon 2ed7f5a80cbdd333dc0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope/container/memory.events
Jan 22 08:35:26 np0005592159 podman[76748]: 2026-01-22 13:35:26.091824703 +0000 UTC m=+0.026211203 container died 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:35:26 np0005592159 podman[76748]: 2026-01-22 13:35:26.129467472 +0000 UTC m=+0.063853952 container remove 2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Jan 22 08:35:26 np0005592159 systemd[1]: libpod-conmon-2ed7f5a80cbdd333dc0b1b34ccfb37d283c6e405bbb18ab5a629662f1b7098d6.scope: Deactivated successfully.
Jan 22 08:35:26 np0005592159 podman[76765]: 2026-01-22 13:35:26.21126803 +0000 UTC m=+0.044827088 container create 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:35:26 np0005592159 systemd[1]: Started libpod-conmon-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope.
Jan 22 08:35:26 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:26 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:26 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:26 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:26 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:26 np0005592159 podman[76765]: 2026-01-22 13:35:26.191347411 +0000 UTC m=+0.024906499 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:26 np0005592159 podman[76765]: 2026-01-22 13:35:26.289369621 +0000 UTC m=+0.122928689 container init 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:35:26 np0005592159 podman[76765]: 2026-01-22 13:35:26.297378059 +0000 UTC m=+0.130937117 container start 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:35:26 np0005592159 podman[76765]: 2026-01-22 13:35:26.301401574 +0000 UTC m=+0.134960632 container attach 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 22 08:35:27 np0005592159 systemd[1]: libpod-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope: Deactivated successfully.
Jan 22 08:35:27 np0005592159 podman[76765]: 2026-01-22 13:35:27.308958484 +0000 UTC m=+1.142517552 container died 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Jan 22 08:35:27 np0005592159 systemd[1]: var-lib-containers-storage-overlay-856bad82ec86b68f615f351f4eaf7a9626951c02ae17e2742d1e824b8822e381-merged.mount: Deactivated successfully.
Jan 22 08:35:27 np0005592159 podman[76765]: 2026-01-22 13:35:27.380561775 +0000 UTC m=+1.214120833 container remove 2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True)
Jan 22 08:35:27 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:27 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:27 np0005592159 systemd[1]: libpod-conmon-2d4aae38d0fa4e0a0b2fd870d562e53e3bc1bb4f5bc3664dcb7e1edc3f9b5b02.scope: Deactivated successfully.
Jan 22 08:35:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:27 np0005592159 systemd[1]: Reached target All Ceph clusters and services.
Jan 22 08:35:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:28 np0005592159 systemd[1]: Reached target Ceph cluster 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:28 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:28 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:28 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:28 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:28 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:28 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:28 np0005592159 systemd[1]: Created slice Slice /system/ceph-088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:28 np0005592159 systemd[1]: Reached target System Time Set.
Jan 22 08:35:28 np0005592159 systemd[1]: Reached target System Time Synchronized.
Jan 22 08:35:28 np0005592159 systemd[1]: Starting Ceph mon.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:35:28 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:28 np0005592159 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 22 08:35:28 np0005592159 podman[77062]: 2026-01-22 13:35:28.868786757 +0000 UTC m=+0.037456426 container create ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 22 08:35:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6269f9312632c62e86d13c965ce5e4ccf9b1ba9a87f9e29364ed084fe61c1572/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:28 np0005592159 podman[77062]: 2026-01-22 13:35:28.925749809 +0000 UTC m=+0.094419508 container init ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:35:28 np0005592159 podman[77062]: 2026-01-22 13:35:28.93193866 +0000 UTC m=+0.100608329 container start ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:35:28 np0005592159 bash[77062]: ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6
Jan 22 08:35:28 np0005592159 podman[77062]: 2026-01-22 13:35:28.853253673 +0000 UTC m=+0.021923362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:28 np0005592159 systemd[1]: Started Ceph mon.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: pidfile_write: ignore empty --pid-file
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: load: jerasure load: lrc 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Git sha 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: DB SUMMARY
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: DB Session ID:  HOKNYZUMFPVI0T4U6KMU
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                                     Options.env: 0x55f4cd06bc40
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                                Options.info_log: 0x55f4cf3a0fc0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                                 Options.wal_dir: 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                    Options.write_buffer_manager: 0x55f4cf3b0b40
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                               Options.row_cache: None
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                              Options.wal_filter: None
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.wal_compression: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.max_background_jobs: 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.max_total_wal_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:       Options.compaction_readahead_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Compression algorithms supported:
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kZSTD supported: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kXpressCompression supported: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kZlibCompression supported: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:           Options.merge_operator: 
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:        Options.compaction_filter: None
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4cf3a0c00)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4cf3991f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:        Options.write_buffer_size: 33554432
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:  Options.max_write_buffer_number: 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.compression: NoCompression
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.num_levels: 7
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 2fc6eab8-1992-4005-a2ff-000040659fe1
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928983160, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928986627, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088928986799, "job": 1, "event": "recovery_finished"}
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f4cf3c2e00
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: DB pointer 0x55f4cf44c000
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.11 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 512.00 MB usage: 0.86 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.64 KB,0.00012219%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:35:28 np0005592159 ceph-mon[77081]: mon.compute-2@-1(???) e0 preinit fsid 088fe176-0106-5401-803c-2da38b73b76a
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).mds e2 new map
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:35:18.163248+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 0 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 0 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 1 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 1 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 1 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e24 e24: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e25 e25: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e26 e26: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e27 e27: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e28 e28: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e29 e29: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e30 e30: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e31 e31: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e32 e32: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 e33: 2 total, 2 up, 2 in
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 3314933000852226048, adjusting msgr requires
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).osd e33 crush map has features 288514051259236352, adjusting msgr requires
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Unable to set osd_memory_target on compute-0 to 134211993: error parsing value: Value '134211993' is below minimum 939524096
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/974439093' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2472273245' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/105373315' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2816658728' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1671536897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2138351977' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1551997886' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1090994608' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3233251670' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/677900918' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1174767820' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3318117351' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/1015326372' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2012634198' entity='client.admin' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Saving service ingress.rgw.default spec with placement count:2
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/4027153888' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.client.admin.keyring
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Deploying daemon mon.compute-2 on compute-2
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 22 08:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..8) refresh upgraded, format 0 -> 3
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,created_at=2026-01-22T13:35:26.337912Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: paxos.1).electionLogic(10) init, last seen epoch 10
Jan 22 08:35:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-0 calling monitor election
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-2 calling monitor election
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-1 calling monitor election
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: Health detail: HEALTH_ERR 1 filesystem is offline; 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: [ERR] MDS_ALL_DOWN: 1 filesystem is offline
Jan 22 08:35:41 np0005592159 ceph-mon[77081]:    fs cephfs is offline because no MDS is active for it.
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: [WRN] MDS_UP_LESS_THAN_MAX: 1 filesystem is online with fewer MDS than max_mds
Jan 22 08:35:41 np0005592159 ceph-mon[77081]:    fs cephfs has 0 MDS online, but wants 1
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.203787988 +0000 UTC m=+0.042110506 container create 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:35:42 np0005592159 systemd[72610]: Starting Mark boot as successful...
Jan 22 08:35:42 np0005592159 systemd[72610]: Finished Mark boot as successful.
Jan 22 08:35:42 np0005592159 systemd[1]: Started libpod-conmon-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope.
Jan 22 08:35:42 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.260303988 +0000 UTC m=+0.098626526 container init 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.266223612 +0000 UTC m=+0.104546120 container start 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.269794085 +0000 UTC m=+0.108116633 container attach 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:35:42 np0005592159 youthful_ardinghelli[77275]: 167 167
Jan 22 08:35:42 np0005592159 systemd[1]: libpod-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope: Deactivated successfully.
Jan 22 08:35:42 np0005592159 conmon[77275]: conmon 3fe5112f81d0e2300ada <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope/container/memory.events
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.273509812 +0000 UTC m=+0.111832330 container died 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.18423759 +0000 UTC m=+0.022560128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:42 np0005592159 systemd[1]: var-lib-containers-storage-overlay-7eebce0a94d56e6df0ec4848887b03614267c4d8b406ffe978cca2ec168a88d9-merged.mount: Deactivated successfully.
Jan 22 08:35:42 np0005592159 podman[77258]: 2026-01-22 13:35:42.316131671 +0000 UTC m=+0.154454219 container remove 3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_ardinghelli, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:42 np0005592159 systemd[1]: libpod-conmon-3fe5112f81d0e2300ada80980c204c68a728c0dc39a8ce606d51d4483e0006fc.scope: Deactivated successfully.
Jan 22 08:35:42 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:42 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:42 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:42 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:42 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:42 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:42 np0005592159 systemd[1]: Starting Ceph mgr.compute-2.tjdsdx for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:35:43 np0005592159 podman[77418]: 2026-01-22 13:35:43.074661092 +0000 UTC m=+0.039503689 container create 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:35:43 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:43 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:43 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:43 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaba1a4c1446d779d6c3516cfd324aad6d83d7c423cfe84d48f1bb4f78328aa6/merged/var/lib/ceph/mgr/ceph-compute-2.tjdsdx supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:43 np0005592159 podman[77418]: 2026-01-22 13:35:43.14418109 +0000 UTC m=+0.109023707 container init 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:43 np0005592159 podman[77418]: 2026-01-22 13:35:43.150121805 +0000 UTC m=+0.114964402 container start 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:43 np0005592159 podman[77418]: 2026-01-22 13:35:43.055986626 +0000 UTC m=+0.020829243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:43 np0005592159 bash[77418]: 3f48eeed4688717dc1b70b826cbb76219abc8f1d02edfa4f514b989747c1506f
Jan 22 08:35:43 np0005592159 systemd[1]: Started Ceph mgr.compute-2.tjdsdx for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: pidfile_write: ignore empty --pid-file
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'alerts'
Jan 22 08:35:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.tjdsdx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:43 np0005592159 ceph-mon[77081]: Deploying daemon mgr.compute-2.tjdsdx on compute-2
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'balancer'
Jan 22 08:35:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:43.606+0000 7f5297bb2140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:35:43 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'cephadm'
Jan 22 08:35:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:43.867+0000 7f5297bb2140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1019920026 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.hzmatt", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 22 08:35:44 np0005592159 ceph-mon[77081]: Deploying daemon mgr.compute-1.hzmatt on compute-1
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 22 08:35:45 np0005592159 ceph-mon[77081]: Deploying daemon crash.compute-2 on compute-2
Jan 22 08:35:45 np0005592159 podman[77615]: 2026-01-22 13:35:45.956788162 +0000 UTC m=+0.040673939 container create ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:45 np0005592159 systemd[1]: Started libpod-conmon-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope.
Jan 22 08:35:46 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:45.938795294 +0000 UTC m=+0.022680991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:46.033435835 +0000 UTC m=+0.117321542 container init ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:46.048773614 +0000 UTC m=+0.132659301 container start ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:46.052874051 +0000 UTC m=+0.136759728 container attach ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:46 np0005592159 modest_newton[77632]: 167 167
Jan 22 08:35:46 np0005592159 systemd[1]: libpod-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope: Deactivated successfully.
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:46.056039663 +0000 UTC m=+0.139925340 container died ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:35:46 np0005592159 systemd[1]: var-lib-containers-storage-overlay-38c8e96a49854486ebe6ba9a274bca202e83b1bafd2c285b12434ac64efb1189-merged.mount: Deactivated successfully.
Jan 22 08:35:46 np0005592159 podman[77615]: 2026-01-22 13:35:46.101832175 +0000 UTC m=+0.185717852 container remove ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_newton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Jan 22 08:35:46 np0005592159 systemd[1]: libpod-conmon-ffab930dbd7ea72b7796f5e4c56e15022b196664225984153988a96830d10916.scope: Deactivated successfully.
Jan 22 08:35:46 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'crash'
Jan 22 08:35:46 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:46 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:46 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:46 np0005592159 systemd[1]: Reloading.
Jan 22 08:35:46 np0005592159 ceph-mgr[77438]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:35:46 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'dashboard'
Jan 22 08:35:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:46.513+0000 7f5297bb2140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 22 08:35:46 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:35:46 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:35:48 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'devicehealth'
Jan 22 08:35:48 np0005592159 ceph-mgr[77438]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'diskprediction_local'
Jan 22 08:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:48.422+0000 7f5297bb2140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 22 08:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 22 08:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]:  from numpy import show_config as show_numpy_config
Jan 22 08:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:48.963+0000 7f5297bb2140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592159 ceph-mgr[77438]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 22 08:35:48 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'influx'
Jan 22 08:35:49 np0005592159 ceph-mgr[77438]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:35:49 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'insights'
Jan 22 08:35:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:49.220+0000 7f5297bb2140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 22 08:35:49 np0005592159 systemd[1]: Starting Ceph crash.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:35:49 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'iostat'
Jan 22 08:35:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1020052989 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:49 np0005592159 podman[77776]: 2026-01-22 13:35:49.598552512 +0000 UTC m=+0.025223037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:49 np0005592159 ceph-mgr[77438]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:35:49 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'k8sevents'
Jan 22 08:35:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:49.743+0000 7f5297bb2140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 22 08:35:49 np0005592159 podman[77776]: 2026-01-22 13:35:49.769645712 +0000 UTC m=+0.196316217 container create 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 08:35:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e34 e34: 2 total, 2 up, 2 in
Jan 22 08:35:50 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:50 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:50 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:50 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8245524d960b7a932b934d051adb52667e2f74f47a73b2cef671a61a33d93cae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:51 np0005592159 podman[77776]: 2026-01-22 13:35:51.373093582 +0000 UTC m=+1.799764177 container init 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:51 np0005592159 podman[77776]: 2026-01-22 13:35:51.383750129 +0000 UTC m=+1.810420654 container start 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Jan 22 08:35:51 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'localpool'
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 22 08:35:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.830+0000 7fd4e3898640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.830+0000 7fd4e3898640 -1 AuthRegistry(0x7fd4dc067150) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.831+0000 7fd4e3898640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.831+0000 7fd4e3898640 -1 AuthRegistry(0x7fd4e3897000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.833+0000 7fd4e0e0c640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e160d640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e1e0e640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: 2026-01-22T13:35:51.834+0000 7fd4e3898640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 22 08:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-crash-compute-2[77792]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 22 08:35:51 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'mds_autoscaler'
Jan 22 08:35:52 np0005592159 bash[77776]: 52f09a99f1b294dc32194bfc1ab7f2d1320bd9205c0632fb77a4b4dfb25dbf93
Jan 22 08:35:52 np0005592159 systemd[1]: Started Ceph crash.compute-2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:35:52 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'mirroring'
Jan 22 08:35:52 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'nfs'
Jan 22 08:35:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e35 e35: 2 total, 2 up, 2 in
Jan 22 08:35:53 np0005592159 ceph-mgr[77438]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:35:53 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'orchestrator'
Jan 22 08:35:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:53.637+0000 7f5297bb2140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 22 08:35:53 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/777136089' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 22 08:35:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.371+0000 7f5297bb2140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'osd_perf_query'
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e36 e36: 2 total, 2 up, 2 in
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e36 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'osd_support'
Jan 22 08:35:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.662+0000 7f5297bb2140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.76915893 +0000 UTC m=+0.084786276 container create 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.70919932 +0000 UTC m=+0.024826626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:54 np0005592159 systemd[1]: Started libpod-conmon-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope.
Jan 22 08:35:54 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.880101486 +0000 UTC m=+0.195728832 container init 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.891423251 +0000 UTC m=+0.207050557 container start 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.895505387 +0000 UTC m=+0.211132723 container attach 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 08:35:54 np0005592159 vigilant_shtern[77965]: 167 167
Jan 22 08:35:54 np0005592159 systemd[1]: libpod-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope: Deactivated successfully.
Jan 22 08:35:54 np0005592159 conmon[77965]: conmon 7e4d8b2310bffc106ce1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope/container/memory.events
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.899004618 +0000 UTC m=+0.214631944 container died 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'pg_autoscaler'
Jan 22 08:35:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:54.912+0000 7f5297bb2140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 22 08:35:54 np0005592159 systemd[1]: var-lib-containers-storage-overlay-785d90322996e245c03947f9d39acce835d63dc7f2cc0f2fa8e00e0da535402b-merged.mount: Deactivated successfully.
Jan 22 08:35:54 np0005592159 podman[77949]: 2026-01-22 13:35:54.964609614 +0000 UTC m=+0.280236920 container remove 7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_shtern, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 22 08:35:54 np0005592159 systemd[1]: libpod-conmon-7e4d8b2310bffc106ce1ad7c638d373236c03d8727f4662a8613878b799d9f37.scope: Deactivated successfully.
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:35:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:35:55 np0005592159 podman[77988]: 2026-01-22 13:35:55.125494159 +0000 UTC m=+0.043908113 container create 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:35:55 np0005592159 systemd[1]: Started libpod-conmon-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope.
Jan 22 08:35:55 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:35:55 np0005592159 podman[77988]: 2026-01-22 13:35:55.104484453 +0000 UTC m=+0.022898437 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:35:55 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 22 08:35:55 np0005592159 podman[77988]: 2026-01-22 13:35:55.234173706 +0000 UTC m=+0.152587670 container init 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Jan 22 08:35:55 np0005592159 podman[77988]: 2026-01-22 13:35:55.244006312 +0000 UTC m=+0.162420266 container start 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:35:55 np0005592159 ceph-mgr[77438]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:35:55 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'progress'
Jan 22 08:35:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:55.243+0000 7f5297bb2140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 22 08:35:55 np0005592159 podman[77988]: 2026-01-22 13:35:55.248705324 +0000 UTC m=+0.167119278 container attach 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:35:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e37 e37: 2 total, 2 up, 2 in
Jan 22 08:35:55 np0005592159 ceph-mgr[77438]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:35:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:55.558+0000 7f5297bb2140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 22 08:35:55 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'prometheus'
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: --> passed data devices: 0 physical, 1 LVM
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: --> relative data size: 1.0
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 3569f689-49d4-4dc0-921b-9570c720a1f3
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e38 e38: 2 total, 2 up, 2 in
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"} v 0) v1
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e39 e39: 3 total, 2 up, 3 in
Jan 22 08:35:56 np0005592159 ceph-mgr[77438]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:35:56 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'rbd_support'
Jan 22 08:35:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:56.734+0000 7f5297bb2140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 22 08:35:56 np0005592159 lvm[78052]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:35:56 np0005592159 lvm[78052]: VG ceph_vg0 finished
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 08:35:56 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 22 08:35:57 np0005592159 ceph-mgr[77438]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'restful'
Jan 22 08:35:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:57.068+0000 7f5297bb2140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 22 08:35:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Jan 22 08:35:57 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2302690487' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 22 08:35:57 np0005592159 mystifying_montalcini[78005]: stderr: got monmap epoch 3
Jan 22 08:35:57 np0005592159 mystifying_montalcini[78005]: --> Creating keyring file for osd.2
Jan 22 08:35:57 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/3979291260' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]: dispatch
Jan 22 08:35:57 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3"}]': finished
Jan 22 08:35:57 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 22 08:35:57 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 22 08:35:57 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 3569f689-49d4-4dc0-921b-9570c720a1f3 --setuser ceph --setgroup ceph
Jan 22 08:35:57 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'rgw'
Jan 22 08:35:58 np0005592159 ceph-mgr[77438]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:35:58 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'rook'
Jan 22 08:35:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:35:58.574+0000 7f5297bb2140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 22 08:35:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:35:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e40 e40: 3 total, 2 up, 3 in
Jan 22 08:35:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:35:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:36:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e41 e41: 3 total, 2 up, 3 in
Jan 22 08:36:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.205+0000 7f5297bb2140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'selftest'
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.479+0000 7f5297bb2140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'snap_schedule'
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: stderr: 2026-01-22T13:35:57.336+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: stderr: 2026-01-22T13:35:57.337+0000 7f5f7f9b0740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'stats'
Jan 22 08:36:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:01.757+0000 7f5297bb2140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 22 08:36:01 np0005592159 mystifying_montalcini[78005]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 22 08:36:01 np0005592159 systemd[1]: libpod-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Deactivated successfully.
Jan 22 08:36:01 np0005592159 systemd[1]: libpod-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Consumed 2.649s CPU time.
Jan 22 08:36:01 np0005592159 podman[77988]: 2026-01-22 13:36:01.803865137 +0000 UTC m=+6.722279101 container died 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:01 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'status'
Jan 22 08:36:02 np0005592159 ceph-mgr[77438]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:36:02 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'telegraf'
Jan 22 08:36:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:02.271+0000 7f5297bb2140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 22 08:36:02 np0005592159 ceph-mgr[77438]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:36:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:02.522+0000 7f5297bb2140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 22 08:36:02 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'telemetry'
Jan 22 08:36:03 np0005592159 ceph-mgr[77438]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:36:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:03.193+0000 7f5297bb2140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 22 08:36:03 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'test_orchestrator'
Jan 22 08:36:03 np0005592159 ceph-mgr[77438]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:36:03 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'volumes'
Jan 22 08:36:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:03.904+0000 7f5297bb2140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:04 np0005592159 ceph-mgr[77438]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592159 ceph-mgr[77438]: mgr[py] Loading python module 'zabbix'
Jan 22 08:36:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:04.700+0000 7f5297bb2140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592159 ceph-mgr[77438]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mgr-compute-2-tjdsdx[77434]: 2026-01-22T13:36:04.940+0000 7f5297bb2140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 22 08:36:04 np0005592159 ceph-mgr[77438]: ms_deliver_dispatch: unhandled message 0x562f1e9fb600 mon_map magic: 0 v1 from mon.1 v2:192.168.122.102:3300/0
Jan 22 08:36:04 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:05 np0005592159 systemd[1]: var-lib-containers-storage-overlay-58b611d3a482bdb3635903c618afd0059c276b81f002cb4050fd90e9090848b6-merged.mount: Deactivated successfully.
Jan 22 08:36:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:05 np0005592159 systemd[1]: session-19.scope: Deactivated successfully.
Jan 22 08:36:05 np0005592159 systemd[1]: session-19.scope: Consumed 8.980s CPU time.
Jan 22 08:36:05 np0005592159 systemd-logind[787]: Session 19 logged out. Waiting for processes to exit.
Jan 22 08:36:05 np0005592159 systemd-logind[787]: Removed session 19.
Jan 22 08:36:05 np0005592159 podman[77988]: 2026-01-22 13:36:05.197497783 +0000 UTC m=+10.115911727 container remove 242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_montalcini, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:05 np0005592159 systemd[1]: libpod-conmon-242fc68dbfad2cd16e43ee1d4aaf4903d6db2802acdf02a752f00944bac952fa.scope: Deactivated successfully.
Jan 22 08:36:05 np0005592159 podman[79125]: 2026-01-22 13:36:05.833651401 +0000 UTC m=+0.024165720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:05 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:06 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.443617539 +0000 UTC m=+1.634131858 container create fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 22 08:36:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e42 e42: 3 total, 2 up, 3 in
Jan 22 08:36:07 np0005592159 systemd[1]: Started libpod-conmon-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope.
Jan 22 08:36:07 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.575068058 +0000 UTC m=+1.765582387 container init fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.587379518 +0000 UTC m=+1.777893817 container start fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.592954343 +0000 UTC m=+1.783468642 container attach fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 22 08:36:07 np0005592159 ecstatic_bohr[79141]: 167 167
Jan 22 08:36:07 np0005592159 systemd[1]: libpod-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope: Deactivated successfully.
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.597911502 +0000 UTC m=+1.788425801 container died fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:07 np0005592159 systemd[1]: var-lib-containers-storage-overlay-92d4f3fa2503645788d8324f74855da5b2e15f6d7de7371668c4420bc6df12fb-merged.mount: Deactivated successfully.
Jan 22 08:36:07 np0005592159 podman[79125]: 2026-01-22 13:36:07.647220945 +0000 UTC m=+1.837735244 container remove fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_bohr, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Jan 22 08:36:07 np0005592159 systemd[1]: libpod-conmon-fceb0c93b1858e21d57487d7d7eb459d27a7cdb3f371761a027292b66fab9a1c.scope: Deactivated successfully.
Jan 22 08:36:07 np0005592159 podman[79164]: 2026-01-22 13:36:07.851711774 +0000 UTC m=+0.074734125 container create 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:36:07 np0005592159 systemd[1]: Started libpod-conmon-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope.
Jan 22 08:36:07 np0005592159 podman[79164]: 2026-01-22 13:36:07.809592989 +0000 UTC m=+0.032615370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:07 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:07 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:07 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:07 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:07 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:10 np0005592159 podman[79164]: 2026-01-22 13:36:10.892179814 +0000 UTC m=+3.115202185 container init 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:36:10 np0005592159 podman[79164]: 2026-01-22 13:36:10.905463456 +0000 UTC m=+3.128485837 container start 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:11 np0005592159 podman[79164]: 2026-01-22 13:36:11.176085571 +0000 UTC m=+3.399107972 container attach 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:11 np0005592159 eager_saha[79181]: {
Jan 22 08:36:11 np0005592159 eager_saha[79181]:    "2": [
Jan 22 08:36:11 np0005592159 eager_saha[79181]:        {
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "devices": [
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "/dev/loop3"
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            ],
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "lv_name": "ceph_lv0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "lv_size": "7511998464",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=088fe176-0106-5401-803c-2da38b73b76a,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=3569f689-49d4-4dc0-921b-9570c720a1f3,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "lv_uuid": "jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "name": "ceph_lv0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "path": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "tags": {
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.block_uuid": "jEocwv-ccRD-GQ8s-06tX-i7z2-rzc0-cFSAk3",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.cephx_lockbox_secret": "",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.cluster_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.cluster_name": "ceph",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.crush_device_class": "",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.encrypted": "0",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.osd_fsid": "3569f689-49d4-4dc0-921b-9570c720a1f3",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.osd_id": "2",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.osdspec_affinity": "default_drive_group",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.type": "block",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:                "ceph.vdo": "0"
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            },
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "type": "block",
Jan 22 08:36:11 np0005592159 eager_saha[79181]:            "vg_name": "ceph_vg0"
Jan 22 08:36:11 np0005592159 eager_saha[79181]:        }
Jan 22 08:36:11 np0005592159 eager_saha[79181]:    ]
Jan 22 08:36:11 np0005592159 eager_saha[79181]: }
Jan 22 08:36:11 np0005592159 systemd[1]: libpod-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope: Deactivated successfully.
Jan 22 08:36:11 np0005592159 podman[79164]: 2026-01-22 13:36:11.706580086 +0000 UTC m=+3.929602437 container died 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 e43: 3 total, 2 up, 3 in
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:11 np0005592159 systemd[1]: var-lib-containers-storage-overlay-51bb96010b43355990352523f059fd666929cb39eb429f5b382d8648321d84e8-merged.mount: Deactivated successfully.
Jan 22 08:36:11 np0005592159 podman[79164]: 2026-01-22 13:36:11.863820229 +0000 UTC m=+4.086842570 container remove 06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_saha, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:11 np0005592159 systemd[1]: libpod-conmon-06be06e0502f9a9a5026a31d7d942ecc65949947851c4ee4820d203a016ef910.scope: Deactivated successfully.
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.556592782 +0000 UTC m=+0.045387103 container create 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Jan 22 08:36:12 np0005592159 systemd[1]: Started libpod-conmon-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope.
Jan 22 08:36:12 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.537138276 +0000 UTC m=+0.025932627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.635709206 +0000 UTC m=+0.124503527 container init 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.643096882 +0000 UTC m=+0.131891203 container start 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.64682556 +0000 UTC m=+0.135619881 container attach 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:12 np0005592159 crazy_diffie[79357]: 167 167
Jan 22 08:36:12 np0005592159 systemd[1]: libpod-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope: Deactivated successfully.
Jan 22 08:36:12 np0005592159 conmon[79357]: conmon 69294cacae79399a349d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope/container/memory.events
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.649005858 +0000 UTC m=+0.137800179 container died 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:12 np0005592159 systemd[1]: var-lib-containers-storage-overlay-e6371cccffdc24f50375a2d375fe09c5e3fd4ae0a7c9f135f0930506b265e20b-merged.mount: Deactivated successfully.
Jan 22 08:36:12 np0005592159 podman[79342]: 2026-01-22 13:36:12.688628577 +0000 UTC m=+0.177422898 container remove 69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_diffie, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Jan 22 08:36:12 np0005592159 systemd[1]: libpod-conmon-69294cacae79399a349dd469d0840ed40164d3285db78b739f2a585bd7dabcf1.scope: Deactivated successfully.
Jan 22 08:36:13 np0005592159 podman[79391]: 2026-01-22 13:36:13.623194601 +0000 UTC m=+0.024312494 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 22 08:36:13 np0005592159 ceph-mon[77081]: Deploying daemon osd.2 on compute-2
Jan 22 08:36:13 np0005592159 podman[79391]: 2026-01-22 13:36:13.871663599 +0000 UTC m=+0.272781482 container create 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:14 np0005592159 systemd[1]: Started libpod-conmon-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope.
Jan 22 08:36:14 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:14 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:14 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:14 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:14 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:14 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:14 np0005592159 podman[79391]: 2026-01-22 13:36:14.490604556 +0000 UTC m=+0.891722459 container init 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 08:36:14 np0005592159 podman[79391]: 2026-01-22 13:36:14.49907106 +0000 UTC m=+0.900188943 container start 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:14 np0005592159 podman[79391]: 2026-01-22 13:36:14.657731881 +0000 UTC m=+1.058849894 container attach 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Jan 22 08:36:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]:                            [--no-systemd] [--no-tmpfs]
Jan 22 08:36:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test[79408]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 22 08:36:15 np0005592159 podman[79391]: 2026-01-22 13:36:15.178225201 +0000 UTC m=+1.579343084 container died 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:15 np0005592159 systemd[1]: libpod-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope: Deactivated successfully.
Jan 22 08:36:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:16 np0005592159 systemd[1]: var-lib-containers-storage-overlay-657b660ed0e5e9ae2187a091f4b1f5c080d5b939caf64add6b232b2eb0069aea-merged.mount: Deactivated successfully.
Jan 22 08:36:16 np0005592159 podman[79391]: 2026-01-22 13:36:16.998202247 +0000 UTC m=+3.399320170 container remove 602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:17 np0005592159 systemd[1]: libpod-conmon-602a494059e2cacd902b329c26fdb7799db8ca01230f76e73f85556f96f98dfd.scope: Deactivated successfully.
Jan 22 08:36:17 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:17 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:17 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:18 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:18 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:18 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:18 np0005592159 systemd[1]: Starting Ceph osd.2 for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:18 np0005592159 podman[79571]: 2026-01-22 13:36:18.605062879 +0000 UTC m=+0.033276802 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:18 np0005592159 podman[79571]: 2026-01-22 13:36:18.796340904 +0000 UTC m=+0.224554797 container create 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Jan 22 08:36:18 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:18 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:19 np0005592159 podman[79571]: 2026-01-22 13:36:19.000703944 +0000 UTC m=+0.428917857 container init 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:19 np0005592159 podman[79571]: 2026-01-22 13:36:19.007620937 +0000 UTC m=+0.435834830 container start 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:36:19 np0005592159 podman[79571]: 2026-01-22 13:36:19.045691505 +0000 UTC m=+0.473905398 container attach 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:20 np0005592159 bash[79571]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 22 08:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate[79586]: --> ceph-volume raw activate successful for osd ID: 2
Jan 22 08:36:20 np0005592159 bash[79571]: --> ceph-volume raw activate successful for osd ID: 2
Jan 22 08:36:20 np0005592159 systemd[1]: libpod-8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f.scope: Deactivated successfully.
Jan 22 08:36:20 np0005592159 systemd[1]: libpod-8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f.scope: Consumed 1.265s CPU time.
Jan 22 08:36:20 np0005592159 podman[79699]: 2026-01-22 13:36:20.306109566 +0000 UTC m=+0.037660338 container died 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay-22384a255fd25c21b0993a3fd4354a144d82bdc8b2276f46845c9307147aa402-merged.mount: Deactivated successfully.
Jan 22 08:36:27 np0005592159 podman[79699]: 2026-01-22 13:36:27.243977621 +0000 UTC m=+6.975528413 container remove 8b1d4a9e03a78fa6ef82205fec21514ca0ad0c3ced5fd3a6e0c2cdda0c38906f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2-activate, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:36:27 np0005592159 podman[79759]: 2026-01-22 13:36:27.452892992 +0000 UTC m=+0.024889260 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:27 np0005592159 podman[79759]: 2026-01-22 13:36:27.721066092 +0000 UTC m=+0.293062360 container create 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:28 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdd406aa7bdb74b2323a09e2995461363a4b1400f1ae42685b71b4e3d7c9a098/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:28 np0005592159 podman[79759]: 2026-01-22 13:36:28.83628987 +0000 UTC m=+1.408286118 container init 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:36:28 np0005592159 podman[79759]: 2026-01-22 13:36:28.843373387 +0000 UTC m=+1.415369625 container start 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: pidfile_write: ignore empty --pid-file
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359b3d000 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 08:36:29 np0005592159 bash[79759]: 1f90ecb4fcc015bd1f2f979a5a563080acb2d28030758941d6958f2336c7101d
Jan 22 08:36:29 np0005592159 systemd[1]: Started Ceph osd.2 for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557358d31c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: load: jerasure load: lrc 
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:36:29 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc4c00 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs mount
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs mount shared_bdev_used = 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Git sha 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DB SUMMARY
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DB Session ID:  HGFKAE26TIJZ4TV8SS1B
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                     Options.env: 0x557359bc7f10
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                Options.info_log: 0x557358daeca0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.write_buffer_manager: 0x557359cc8460
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.row_cache: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.wal_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.wal_compression: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Compression algorithms supported:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZSTD supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kXpressCompression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZlibCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae720)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4dd0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358dae6c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da4430#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11973bfc-0335-469d-b17c-3e572773de22
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990413810, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990414214, "job": 1, "event": "recovery_finished"}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: freelist init
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: freelist _read_cfg
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 7.0 GiB in 2 extents, allocator type hybrid, capacity 0x1bfc00000, block size 0x1000, free 0x1bfbfd000, fragmentation 5.5e-07
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs umount
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) close
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bdev(0x557359bc5400 /var/lib/ceph/osd/ceph-2/block) open size 7511998464 (0x1bfc00000, 7.0 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 7.0 GiB
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs mount
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs _init_alloc shared, id 1, capacity 0x1bfc00000, block size 0x10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluefs mount shared_bdev_used = 4718592
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,7136398540 db.slow,7136398540
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: RocksDB version: 7.9.2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Git sha 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Compile date 2025-05-06 23:30:25
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DB SUMMARY
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DB Session ID:  HGFKAE26TIJZ4TV8SS1A
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: CURRENT file:  CURRENT
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: IDENTITY file:  IDENTITY
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.error_if_exists: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.create_if_missing: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.paranoid_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                     Options.env: 0x557358ef64d0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                Options.info_log: 0x557358daf980
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_file_opening_threads: 16
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.statistics: (nil)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.use_fsync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.max_log_file_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.allow_fallocate: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.use_direct_reads: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.create_missing_column_families: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.db_log_dir: 
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                                 Options.wal_dir: db.wal
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.advise_random_on_open: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.write_buffer_manager: 0x557359cc8460
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                            Options.rate_limiter: (nil)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.unordered_write: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.row_cache: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                              Options.wal_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.allow_ingest_behind: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.two_write_queues: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.manual_wal_flush: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.wal_compression: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.atomic_flush: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.log_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.allow_data_in_errors: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.db_host_id: __hostname__
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_background_jobs: 4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_background_compactions: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_subcompactions: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.max_open_files: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.max_background_flushes: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Compression algorithms supported:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZSTD supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kXpressCompression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kBZip2Compression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kLZ4Compression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kZlibCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kLZ4HCCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: #011kSnappyCompression supported: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db8120)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da5350#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da54b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da54b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:           Options.merge_operator: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.compaction_filter_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.sst_partitioner_factory: None
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x557358db80a0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x557358da54b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.write_buffer_size: 16777216
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.max_write_buffer_number: 64
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.compression: LZ4
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.num_levels: 7
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.level: 32767
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.compression_opts.strategy: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                  Options.compression_opts.enabled: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.arena_block_size: 1048576
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.disable_auto_compactions: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.inplace_update_support: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.bloom_locality: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                    Options.max_successive_merges: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.paranoid_file_checks: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.force_consistency_checks: 1
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.report_bg_io_stats: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                               Options.ttl: 2592000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                       Options.enable_blob_files: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                           Options.min_blob_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                          Options.blob_file_size: 268435456
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb:                Options.blob_file_starting_level: 0
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 11973bfc-0335-469d-b17c-3e572773de22
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990682556, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990827756, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990872099, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990901660, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088990, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "11973bfc-0335-469d-b17c-3e572773de22", "db_session_id": "HGFKAE26TIJZ4TV8SS1A", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769088990903342, "job": 1, "event": "recovery_finished"}
Jan 22 08:36:30 np0005592159 ceph-osd[79779]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x557358e77c00
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: rocksdb: DB pointer 0x557359cb3a00
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
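The options string in the line above is BlueStore's RocksDB tuning (the bluestore_rocksdb_options setting, echoed once _open_db succeeds). As a minimal sketch only, assuming the ceph CLI and an admin keyring are reachable from this host (for example via "cephadm shell"), the effective value for this OSD could be read back and an override staged as below; the edit shown is purely illustrative, not a tuning recommendation:

#!/usr/bin/env python3
# Sketch: read (and optionally override) bluestore_rocksdb_options for osd.2.
# Assumes the `ceph` CLI and a usable keyring are available in this environment.
import subprocess

OSD = "osd.2"
OPTION = "bluestore_rocksdb_options"

def ceph(*args: str) -> str:
    """Run a ceph CLI subcommand and return its stdout, stripped."""
    return subprocess.run(["ceph", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# Show the value this OSD is currently configured with.
current = ceph("config", "get", OSD, OPTION)
print(f"{OSD} {OPTION} = {current}")

# Staging an override (left commented out; RocksDB options are applied at DB
# open, so the OSD would have to be restarted for a change to take effect):
# ceph("config", "set", OSD, OPTION,
#      current.replace("max_background_jobs=4", "max_background_jobs=8"))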
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.5 total, 0.5 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] 
**#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.5 total, 0.5 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 460.80 MB usag
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: _get_class not permitted to load lua
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: _get_class not permitted to load sdk
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: _get_class not permitted to load test_remote_reads
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 load_pgs
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 load_pgs opened 0 pgs
Jan 22 08:36:31 np0005592159 ceph-osd[79779]: osd.2 0 log_to_monitors true
Jan 22 08:36:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:31.215+0000 7f4800129740 -1 osd.2 0 log_to_monitors true
Jan 22 08:36:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Jan 22 08:36:31 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:31 np0005592159 podman[80350]: 2026-01-22 13:36:31.838608097 +0000 UTC m=+0.049194754 container create c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 08:36:31 np0005592159 podman[80350]: 2026-01-22 13:36:31.814776396 +0000 UTC m=+0.025363073 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:31 np0005592159 systemd[1]: Started libpod-conmon-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope.
Jan 22 08:36:31 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:32 np0005592159 podman[80350]: 2026-01-22 13:36:32.064650002 +0000 UTC m=+0.275236689 container init c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:36:32 np0005592159 podman[80350]: 2026-01-22 13:36:32.074453202 +0000 UTC m=+0.285039859 container start c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:36:32 np0005592159 systemd[1]: libpod-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope: Deactivated successfully.
Jan 22 08:36:32 np0005592159 podman[80350]: 2026-01-22 13:36:32.082356001 +0000 UTC m=+0.292942758 container attach c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Jan 22 08:36:32 np0005592159 suspicious_feynman[80366]: 167 167
Jan 22 08:36:32 np0005592159 conmon[80366]: conmon c689d4272486e10734d8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope/container/memory.events
Jan 22 08:36:32 np0005592159 podman[80350]: 2026-01-22 13:36:32.084486237 +0000 UTC m=+0.295072934 container died c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:32 np0005592159 systemd[1]: var-lib-containers-storage-overlay-12558faf242503d40865f0d494e37a99083fe0711a9e6f4fa9cf4dc7c3621013-merged.mount: Deactivated successfully.
Jan 22 08:36:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 22 08:36:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 22 08:36:32 np0005592159 podman[80350]: 2026-01-22 13:36:32.211752526 +0000 UTC m=+0.422339213 container remove c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:32 np0005592159 systemd[1]: libpod-conmon-c689d4272486e10734d8f2b4e72d969cac39d02955bb0873ca33ba3986bbc5e3.scope: Deactivated successfully.
Jan 22 08:36:32 np0005592159 podman[80393]: 2026-01-22 13:36:32.344023758 +0000 UTC m=+0.019059035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:32 np0005592159 podman[80393]: 2026-01-22 13:36:32.473742793 +0000 UTC m=+0.148778050 container create e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e44 e44: 3 total, 2 up, 3 in
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]} v 0) v1
Jan 22 08:36:32 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:32 np0005592159 systemd[1]: Started libpod-conmon-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope.
Jan 22 08:36:32 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:32 np0005592159 podman[80393]: 2026-01-22 13:36:32.662957412 +0000 UTC m=+0.337992709 container init e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 22 08:36:32 np0005592159 podman[80393]: 2026-01-22 13:36:32.669601308 +0000 UTC m=+0.344636605 container start e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Jan 22 08:36:32 np0005592159 podman[80393]: 2026-01-22 13:36:32.862864485 +0000 UTC m=+0.537899792 container attach e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]: {
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:    "3569f689-49d4-4dc0-921b-9570c720a1f3": {
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:        "ceph_fsid": "088fe176-0106-5401-803c-2da38b73b76a",
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:        "osd_id": 2,
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:        "osd_uuid": "3569f689-49d4-4dc0-921b-9570c720a1f3",
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:        "type": "bluestore"
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]:    }
Jan 22 08:36:33 np0005592159 heuristic_swartz[80409]: }
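The JSON printed by the heuristic_swartz container above is a map keyed by OSD fsid, with ceph_fsid, device, osd_id and type fields for each entry. A minimal parsing sketch, assuming that output has been captured to a file; the filename osd_list.json is hypothetical:

#!/usr/bin/env python3
# Sketch: extract osd_id -> device mappings from a listing shaped like the JSON
# printed above. "osd_list.json" is a hypothetical capture of that output.
import json

with open("osd_list.json") as fh:
    listing = json.load(fh)

for osd_fsid, info in listing.items():
    print(f"osd.{info['osd_id']} ({info['type']}) on {info['device']} "
          f"in cluster {info['ceph_fsid']}, osd uuid {osd_fsid}")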
Jan 22 08:36:33 np0005592159 systemd[1]: libpod-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope: Deactivated successfully.
Jan 22 08:36:33 np0005592159 podman[80393]: 2026-01-22 13:36:33.562010206 +0000 UTC m=+1.237045453 container died e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:33 np0005592159 systemd[1]: var-lib-containers-storage-overlay-9ce90fd6509d8d9568a34360e6265ece7a15239c8b6c3c0c009fdd9603b25762-merged.mount: Deactivated successfully.
Jan 22 08:36:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e45 e45: 3 total, 2 up, 3 in
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 done with init, starting boot process
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 start_boot
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 22 08:36:33 np0005592159 ceph-osd[79779]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 22 08:36:33 np0005592159 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 22 08:36:33 np0005592159 ceph-mon[77081]: from='osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:33 np0005592159 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 22 08:36:33 np0005592159 podman[80393]: 2026-01-22 13:36:33.965532289 +0000 UTC m=+1.640567556 container remove e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_swartz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:33 np0005592159 systemd[1]: libpod-conmon-e8295c43f0219c917f5b4f7a6696d8eff8f5deab23ab11b8b089279f08f7872b.scope: Deactivated successfully.
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:34.970687152 +0000 UTC m=+0.023645507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:35.092562299 +0000 UTC m=+0.145520634 container create 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0068, "args": ["host=compute-2", "root=default"]}]': finished
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.gfsxzw", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:35 np0005592159 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-2.gfsxzw on compute-2
Jan 22 08:36:35 np0005592159 systemd[1]: Started libpod-conmon-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope.
Jan 22 08:36:35 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:35.629626567 +0000 UTC m=+0.682584922 container init 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:35.63617112 +0000 UTC m=+0.689129455 container start 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Jan 22 08:36:35 np0005592159 inspiring_chatterjee[80601]: 167 167
Jan 22 08:36:35 np0005592159 systemd[1]: libpod-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope: Deactivated successfully.
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:35.803338196 +0000 UTC m=+0.856296531 container attach 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Jan 22 08:36:35 np0005592159 podman[80585]: 2026-01-22 13:36:35.805506454 +0000 UTC m=+0.858464819 container died 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:35 np0005592159 systemd[1]: var-lib-containers-storage-overlay-fe55f46ab7dbd8482d9592c955d295a9533bb2328b472113e026868a56d68a31-merged.mount: Deactivated successfully.
Jan 22 08:36:36 np0005592159 podman[80585]: 2026-01-22 13:36:36.427695417 +0000 UTC m=+1.480653752 container remove 1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_chatterjee, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 08:36:36 np0005592159 systemd[1]: libpod-conmon-1020bf408e3e57128acfe98f1ed9ec82957eeab237a392cbb987db8917559f31.scope: Deactivated successfully.
Jan 22 08:36:36 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:36 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:36 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:37 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:37 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:37 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:37 np0005592159 systemd[1]: Starting Ceph rgw.rgw.compute-2.gfsxzw for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:37 np0005592159 podman[80750]: 2026-01-22 13:36:37.892035877 +0000 UTC m=+0.072831199 container create 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:37 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:37 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:37 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:37 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35c0630db1aa3168f009364b4e271af26cc7d640ab40f4aa8151f0310302f5b9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.gfsxzw supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:37 np0005592159 podman[80750]: 2026-01-22 13:36:37.85852452 +0000 UTC m=+0.039319872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:37 np0005592159 podman[80750]: 2026-01-22 13:36:37.973127174 +0000 UTC m=+0.153922576 container init 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:36:37 np0005592159 podman[80750]: 2026-01-22 13:36:37.981707381 +0000 UTC m=+0.162502733 container start 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:36:37 np0005592159 bash[80750]: 49e687254f675aca5071ee91f471edf46c03564ea189efa6346b4d0c66cd7dc0
Jan 22 08:36:37 np0005592159 systemd[1]: Started Ceph rgw.rgw.compute-2.gfsxzw for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:38 np0005592159 radosgw[80769]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:38 np0005592159 radosgw[80769]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Jan 22 08:36:38 np0005592159 radosgw[80769]: framework: beast
Jan 22 08:36:38 np0005592159 radosgw[80769]: framework conf key: endpoint, val: 192.168.122.102:8082
Jan 22 08:36:38 np0005592159 radosgw[80769]: init_numa not setting numa affinity
Jan 22 08:36:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e46 e46: 3 total, 2 up, 3 in
Jan 22 08:36:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Jan 22 08:36:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e47 e47: 3 total, 2 up, 3 in
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 18.851 iops: 4825.905 elapsed_sec: 0.622
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
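The warning above also states the remediation: measure the device's IOPS with an external benchmark and then pin the mclock capacity option it names. A minimal sketch under that assumption, using the ceph config interface and assuming the ceph CLI plus an admin keyring are available; the IOPS figure below is a placeholder to be replaced by a measured value, not a result taken from this log:

#!/usr/bin/env python3
# Sketch: pin osd_mclock_max_capacity_iops_hdd for osd.2 after benchmarking the
# device externally (e.g. with fio), as the warning above recommends.
import subprocess

OSD = "osd.2"
MEASURED_IOPS = 315.0  # placeholder: substitute the externally measured IOPS

subprocess.run(
    ["ceph", "config", "set", OSD,
     "osd_mclock_max_capacity_iops_hdd", str(MEASURED_IOPS)],
    check=True,
)
print(f"{OSD} osd_mclock_max_capacity_iops_hdd pinned to {MEASURED_IOPS}")

Per the [hdd|ssd] suffix in the message, the ssd variant of the option would be the one to set instead if the device class were ssd rather than the hdd class assigned earlier in this log.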
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 0 waiting for initial osdmap
Jan 22 08:36:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:42.420+0000 7f47fc0a9640 -1 osd.2 0 waiting for initial osdmap
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 40 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 40 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 40 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 40 check_osdmap_features require_osd_release unknown -> reef
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 47 set_numa_affinity not setting numa affinity
Jan 22 08:36:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:36:42.464+0000 7f47f76d1640 -1 osd.2 47 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 22 08:36:42 np0005592159 ceph-osd[79779]: osd.2 47 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.thdhdp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:42 np0005592159 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-1.thdhdp on compute-1
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e48 e48: 3 total, 2 up, 3 in
Jan 22 08:36:43 np0005592159 ceph-osd[79779]: osd.2 47 tick checking mon for new map
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: OSD bench result of 4825.905468 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.iqhnfa", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:43 np0005592159 ceph-mon[77081]: Deploying daemon rgw.rgw.compute-0.iqhnfa on compute-0
Jan 22 08:36:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Jan 22 08:36:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Jan 22 08:36:44 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 49 state: booting -> active
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.1d( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.1b( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.1d( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.13( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.15( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.10( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.c( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.d( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[2.a( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.2( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.6( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.3( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.8( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[6.1( empty local-lis/les=0/0 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49) [2] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.1c( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[3.1b( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.19( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.a( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[4.14( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[7.14( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: osd.2 [v2:192.168.122.102:6800/892178328,v1:192.168.122.102:6801/892178328] boot
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.12( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1a( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.15( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.16( empty local-lis/les=0/0 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1f( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1d( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.15( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.0( empty local-lis/les=49/50 n=0 ec=16/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.d( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1d( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.10( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.b( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.2( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.0( empty local-lis/les=49/50 n=0 ec=20/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.3( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.6( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[6.1( empty local-lis/les=49/50 n=0 ec=37/22 lis/c=37/37 les/c/f=38/38/0 sis=49) [2] r=0 lpr=49 pi=[37,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.19( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1b( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=20/20 les/c/f=21/21/0 sis=49) [2] r=0 lpr=49 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.14( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.8( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=36/36 les/c/f=37/37/0 sis=49) [2] r=0 lpr=49 pi=[36,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.14( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=49 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.c( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.12( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=49 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.9( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.e( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.1f( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.15( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.1f( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[4.8( empty local-lis/les=49/50 n=0 ec=36/18 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1a( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[7.16( empty local-lis/les=49/50 n=0 ec=40/24 lis/c=40/40 les/c/f=41/41/0 sis=49) [2] r=0 lpr=50 pi=[40,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=49/50 n=0 ec=36/20 lis/c=42/42 les/c/f=43/43/0 sis=49) [2] r=0 lpr=50 pi=[42,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=49/50 n=0 ec=20/14 lis/c=20/20 les/c/f=22/22/0 sis=49) [2] r=0 lpr=50 pi=[20,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 50 pg[3.9( empty local-lis/les=49/50 n=0 ec=20/16 lis/c=28/28 les/c/f=29/29/0 sis=49) [2] r=0 lpr=50 pi=[28,49)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zycvef", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:46 np0005592159 podman[80972]: 2026-01-22 13:36:46.577392456 +0000 UTC m=+0.022744183 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:47 np0005592159 podman[80972]: 2026-01-22 13:36:47.53990238 +0000 UTC m=+0.985254087 container create d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:48 np0005592159 systemd[1]: Started libpod-conmon-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope.
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
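The line above records cephadm persisting the rgw.rgw service specification with a three-host placement. A hedged sketch of how an equivalent spec can be applied from the CLI (the service id "rgw" and the host names come from the log line; applying it with this one-liner rather than a YAML spec file is an assumption about workflow, not something this log shows):
    ceph orch apply rgw rgw --placement="compute-0 compute-1 compute-2"
With such a placement, the orchestrator schedules one rgw daemon per listed host, which is consistent with the nearby "Deploying daemon rgw.rgw.compute-*" messages.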
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-2.zycvef on compute-2
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:48 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:36:48 np0005592159 podman[80972]: 2026-01-22 13:36:48.110653081 +0000 UTC m=+1.556004868 container init d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:36:48 np0005592159 podman[80972]: 2026-01-22 13:36:48.120695527 +0000 UTC m=+1.566047234 container start d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 22 08:36:48 np0005592159 keen_faraday[80989]: 167 167
Jan 22 08:36:48 np0005592159 systemd[1]: libpod-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope: Deactivated successfully.
Jan 22 08:36:48 np0005592159 podman[80972]: 2026-01-22 13:36:48.141548909 +0000 UTC m=+1.586900646 container attach d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 22 08:36:48 np0005592159 podman[80972]: 2026-01-22 13:36:48.142695139 +0000 UTC m=+1.588046846 container died d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:36:48 np0005592159 systemd[1]: var-lib-containers-storage-overlay-542cab6476e614a5425a84c1c9049293258912d4c918c7eefc49479b4459ad2a-merged.mount: Deactivated successfully.
Jan 22 08:36:48 np0005592159 podman[80972]: 2026-01-22 13:36:48.212625821 +0000 UTC m=+1.657977528 container remove d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_faraday, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:36:48 np0005592159 systemd[1]: libpod-conmon-d536d64c7be5fa8377d0b334df9c2ac4c694a02b173342bad6aabfb8b664b823.scope: Deactivated successfully.
Jan 22 08:36:48 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:48 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:48 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:48 np0005592159 systemd[1]: Reloading.
Jan 22 08:36:48 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:36:48 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Jan 22 08:36:48 np0005592159 systemd[1]: Starting Ceph mds.cephfs.compute-2.zycvef for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:36:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 22 08:36:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 22 08:36:49 np0005592159 podman[81134]: 2026-01-22 13:36:49.149774113 +0000 UTC m=+0.062789224 container create 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:36:49 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:49 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:49 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:49 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7af02108f933d0bcb8c89c30d24a97786ef6bd18fd90154e0884f5f96987649/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.zycvef supports timestamps until 2038 (0x7fffffff)
Jan 22 08:36:49 np0005592159 podman[81134]: 2026-01-22 13:36:49.110817201 +0000 UTC m=+0.023832312 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:36:49 np0005592159 podman[81134]: 2026-01-22 13:36:49.357233106 +0000 UTC m=+0.270248317 container init 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:36:49 np0005592159 podman[81134]: 2026-01-22 13:36:49.36454738 +0000 UTC m=+0.277562531 container start 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Jan 22 08:36:49 np0005592159 bash[81134]: 28402c8a6e0adf22561a923d42802647af00df10eacceb300a94fe8b5f18bf63
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.101:0/3143195983' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/38428064' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/3865277149' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 22 08:36:49 np0005592159 systemd[1]: Started Ceph mds.cephfs.compute-2.zycvef for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:36:49 np0005592159 ceph-mds[81154]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:49 np0005592159 ceph-mds[81154]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Jan 22 08:36:49 np0005592159 ceph-mds[81154]: main not setting numa affinity
Jan 22 08:36:49 np0005592159 ceph-mds[81154]: pidfile_write: ignore empty --pid-file
Jan 22 08:36:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef[81150]: starting mds.cephfs.compute-2.zycvef at 
Jan 22 08:36:49 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 2 from mon.1
Jan 22 08:36:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 22 08:36:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 22 08:36:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.zjixst", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e3 new map
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:35:18.163248+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.zycvef{-1:24139} state up:standby seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 3 from mon.1
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Monitors have assigned me to become a standby.
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e4 new map
Jan 22 08:36:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:51.171709+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:creating seq 1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 4 from mon.1
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x1
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x100
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x600
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x601
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x602
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x603
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x604
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x605
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x606
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x607
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x608
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.cache creating system inode with ino:0x609
Jan 22 08:36:51 np0005592159 ceph-mds[81154]: mds.0.4 creating_done
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-0.zjixst on compute-0
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: daemon mds.cephfs.compute-2.zycvef assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: Cluster is now healthy
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: daemon mds.cephfs.compute-2.zycvef is now active in filesystem cephfs as rank 0
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e5 new map
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-01-22T13:35:18.163168+0000#012modified#0112026-01-22T13:36:52.245537+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=24139}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Jan 22 08:36:52 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 5 from mon.1
Jan 22 08:36:52 np0005592159 ceph-mds[81154]: mds.0.4 handle_mds_map i am now mds.0.4
Jan 22 08:36:52 np0005592159 ceph-mds[81154]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 22 08:36:52 np0005592159 ceph-mds[81154]: mds.0.4 recovery_done -- successful recovery!
Jan 22 08:36:52 np0005592159 ceph-mds[81154]: mds.0.4 active_start
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Jan 22 08:36:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 22 08:36:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/3083812118' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.101:0/1101481797' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e6 new map
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e6 print_map
e6
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T13:35:18.163168+0000
modified	2026-01-22T13:36:52.245537+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	0
[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e7 new map
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e7 print_map
e7
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T13:35:18.163168+0000
modified	2026-01-22T13:36:52.245537+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.100:0/2562405514' entity='client.rgw.rgw.compute-0.iqhnfa' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-2.gfsxzw' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.rgw.rgw.compute-1.thdhdp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 22 08:36:54 np0005592159 ceph-mon[77081]: Cluster is now healthy
Jan 22 08:36:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 22 08:36:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 22 08:36:56 np0005592159 radosgw[80769]: LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-rgw-rgw-compute-2-gfsxzw[80765]: 2026-01-22T13:36:56.186+0000 7f948b851940 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 22 08:36:56 np0005592159 radosgw[80769]: framework: beast
Jan 22 08:36:56 np0005592159 radosgw[80769]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 22 08:36:56 np0005592159 radosgw[80769]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 22 08:36:56 np0005592159 radosgw[80769]: starting handler: beast
Jan 22 08:36:56 np0005592159 radosgw[80769]: set uid:gid to 167:167 (ceph:ceph)
Jan 22 08:36:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592159 radosgw[80769]: mgrc service_daemon_register rgw.24151 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.gfsxzw,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=9ef52632-dffc-43fe-ad78-aca5b0d3574d,zone_name=default,zonegroup_id=961906d1-4e51-43eb-bd43-c4a4ab081aea,zonegroup_name=default}
Jan 22 08:36:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 08:36:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 22 08:36:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 22 08:36:57 np0005592159 ceph-mds[81154]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 22 08:36:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mds-cephfs-compute-2-zycvef[81150]: 2026-01-22T13:36:57.205+0000 7f4cb34e4640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 22 08:36:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:36:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 22 08:36:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.ofmmzj", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 22 08:36:57 np0005592159 ceph-mon[77081]: Deploying daemon mds.cephfs.compute-1.ofmmzj on compute-1
Jan 22 08:36:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:36:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 08:36:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 22 08:36:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Jan 22 08:36:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:36:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 22 08:36:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e8 new map
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e8 print_map
e8
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	5
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T13:35:18.163168+0000
modified	2026-01-22T13:36:52.245537+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 2 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 22 08:37:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 22 08:37:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e9 new map
Jan 22 08:37:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e9 print_map
e9
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	9
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T13:35:18.163168+0000
modified	2026-01-22T13:37:03.744747+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:03 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef Updating MDS map to version 9 from mon.1
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: Deploying daemon haproxy.rgw.default.compute-0.erkqlp on compute-0
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Jan 22 08:37:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 22 08:37:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 22 08:37:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e10 new map
Jan 22 08:37:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).mds e10 print_map
e10
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: 1

Filesystem 'cephfs' (1)
fs_name	cephfs
epoch	9
flags	12 joinable allow_snaps allow_multimds_snaps
created	2026-01-22T13:35:18.163168+0000
modified	2026-01-22T13:37:03.744747+0000
tableserver	0
root	0
session_timeout	60
session_autoclose	300
max_file_size	1099511627776
max_xattr_size	65536
required_client_features	{}
last_failure	0
last_failure_osd_epoch	0
compat	compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}
max_mds	1
in	0
up	{0=24139}
failed
damaged
stopped
data_pools	[7]
metadata_pool	6
inline_data	disabled
balancer
bal_rank_mask	-1
standby_count_wanted	1
[mds.cephfs.compute-2.zycvef{0:24139} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.102:6804/2301191554,v1:192.168.122.102:6805/2301191554] compat {c=[1],r=[1],i=[7ff]}]

Standby daemons:

[mds.cephfs.compute-0.zjixst{-1:14337} state up:standby seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/2895449706,v1:192.168.122.100:6807/2895449706] compat {c=[1],r=[1],i=[7ff]}]
[mds.cephfs.compute-1.ofmmzj{-1:24140} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2522830803,v1:192.168.122.101:6805/2522830803] compat {c=[1],r=[1],i=[7ff]}]
Jan 22 08:37:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Jan 22 08:37:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 22 08:37:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 22 08:37:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 22 08:37:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 22 08:37:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 22 08:37:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000079s ======
Jan 22 08:37:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 22 08:37:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:12.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.1e( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.1c( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.2( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.3( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.16( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.a( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.9( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.11( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.8( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.b( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.3( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.4( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.6( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.19( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.1f( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.10( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.11( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.f( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.e( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.d( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.a( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.f( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.1( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.3( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[10.12( empty local-lis/les=0/0 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.13( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.16( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.15( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.5( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[11.17( empty local-lis/les=0/0 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 62 pg[8.c( empty local-lis/les=0/0 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 22 08:37:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:37:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 22 08:37:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:14.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:16.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:18.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e62 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.552034855s, txc = 0x55735af63200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551962852s, txc = 0x557359b88300
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551544666s, txc = 0x557359b88c00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551220417s, txc = 0x55735a61cf00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.551022530s, txc = 0x55735a27e300
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.550522327s, txc = 0x557359bde000
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.550064087s, txc = 0x55735af63500
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549757481s, txc = 0x55735a61d200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549332619s, txc = 0x55735a61d500
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.549151421s, txc = 0x55735a27e600
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548954487s, txc = 0x55735a2f6000
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548785686s, txc = 0x55735a2f6300
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548469543s, txc = 0x557359b88f00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.548099995s, txc = 0x55735af63800
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547780991s, txc = 0x55735a61d800
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547410965s, txc = 0x557359bde300
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.547196388s, txc = 0x55735a2f6600
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.546813488s, txc = 0x557359b89200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.546355247s, txc = 0x557359b89500
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545954704s, txc = 0x55735a61db00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545588970s, txc = 0x55735a7fcf00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545316696s, txc = 0x55735a8acf00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.545132637s, txc = 0x55735a8ad200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544943810s, txc = 0x55735a8ad500
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544633865s, txc = 0x55735a7fc300
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.544329643s, txc = 0x55735a7fd200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543985367s, txc = 0x55735a7fd800
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543646336s, txc = 0x55735a7fc600
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543431759s, txc = 0x55735a8ad800
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.543244839s, txc = 0x55735a8adb00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.542670727s, txc = 0x55735b635200
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723423004s, txc = 0x557359bde600
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723418713s, txc = 0x55735a7fdb00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723375797s, txc = 0x55735b508000
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723512173s, txc = 0x55735b226f00
Jan 22 08:37:19 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.723829269s, txc = 0x55735af63b00
Jan 22 08:37:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Jan 22 08:37:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:20.229+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:20 np0005592159 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 22 08:37:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:20.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:21.262+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:21 np0005592159 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:22.310+0000 7f47f8ed4640 -1 osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:22 np0005592159 ceph-osd[79779]: osd.2 63 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:37:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:37:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Jan 22 08:37:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 22 08:37:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:23 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.1c( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:23 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.2( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:23.287+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:23 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:24.263+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.9( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.3( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.a( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.8( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.16( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.b( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.6( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.19( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.e( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.11( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.3( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.a( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.13( v 58'2 lc 0'0 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.d( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.16( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.1f( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.15( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[11.17( v 58'2 (0'0,58'2] local-lis/les=62/64 n=0 ec=60/53 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.5( v 48'8 (0'0,48'8] local-lis/les=62/64 n=1 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.12( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.1( v 58'96 (0'0,58'96] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.f( v 48'8 lc 0'0 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.f( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.10( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.4( v 58'96 (0'0,58'96] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.11( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.3( v 61'99 lc 57'84 (0'0,61'99] local-lis/les=62/64 n=1 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=61'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[10.1e( v 58'96 (0'0,58'96] local-lis/les=62/64 n=0 ec=60/51 lis/c=60/60 les/c/f=61/61/0 sis=62) [2] r=0 lpr=62 pi=[60,62)/1 crt=58'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 64 pg[8.c( v 48'8 (0'0,48'8] local-lis/les=62/64 n=0 ec=58/46 lis/c=58/58 les/c/f=59/59/0 sis=62) [2] r=0 lpr=62 pi=[58,62)/1 crt=48'8 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: Deploying daemon haproxy.rgw.default.compute-2.zogxki on compute-2
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:24.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:25 np0005592159 systemd-logind[787]: New session 33 of user zuul.
Jan 22 08:37:25 np0005592159 systemd[1]: Started Session 33 of User zuul.
Jan 22 08:37:25 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:25.213+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:26 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 22 08:37:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:26.200+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 22 08:37:26 np0005592159 python3.9[82084]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:26.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:27 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:27.180+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.27433326 +0000 UTC m=+5.942712441 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.298491143 +0000 UTC m=+5.966870294 container create c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 systemd[1]: Started libpod-conmon-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope.
Jan 22 08:37:27 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.403955092 +0000 UTC m=+6.072334273 container init c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.413378267 +0000 UTC m=+6.081757438 container start c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.41868963 +0000 UTC m=+6.087068801 container attach c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 loving_goodall[82239]: 0 0
Jan 22 08:37:27 np0005592159 systemd[1]: libpod-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope: Deactivated successfully.
Jan 22 08:37:27 np0005592159 conmon[82239]: conmon c7e98b80654d1d1e4c8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope/container/memory.events
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.421131696 +0000 UTC m=+6.089510877 container died c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 systemd[1]: var-lib-containers-storage-overlay-7afb65ef436f8de8211342ae0f3f01e8b45e5591ea29bd0d6446be2c2825b425-merged.mount: Deactivated successfully.
Jan 22 08:37:27 np0005592159 podman[81879]: 2026-01-22 13:37:27.470664065 +0000 UTC m=+6.139043216 container remove c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f (image=quay.io/ceph/haproxy:2.3, name=loving_goodall)
Jan 22 08:37:27 np0005592159 systemd[1]: libpod-conmon-c7e98b80654d1d1e4c8a1c2aec22c73479f653978ce385be2fe18ad25b407f4f.scope: Deactivated successfully.
Jan 22 08:37:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:37:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:37:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:28 np0005592159 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.zogxki for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:37:28 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:28.217+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:28 np0005592159 python3.9[82463]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:28 np0005592159 podman[82513]: 2026-01-22 13:37:28.303426176 +0000 UTC m=+0.021754869 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 22 08:37:28 np0005592159 podman[82513]: 2026-01-22 13:37:28.507057928 +0000 UTC m=+0.225386601 container create ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:37:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:28.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:28 np0005592159 ceph-mon[77081]: Health check failed: 2 slow ops, oldest one blocked for 36 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:29 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb07ffd7b803d428dbe6adac05a87d7037dad80cef11765c51e4ad5be67c2ac1/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:29 np0005592159 podman[82513]: 2026-01-22 13:37:29.014283463 +0000 UTC m=+0.732612166 container init ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:37:29 np0005592159 podman[82513]: 2026-01-22 13:37:29.020564172 +0000 UTC m=+0.738892845 container start ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:37:29 np0005592159 bash[82513]: ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f
Jan 22 08:37:29 np0005592159 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.zogxki for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:37:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki[82538]: [NOTICE] 021/133729 (2) : New worker #1 (4) forked
Jan 22 08:37:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:29.207+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:29 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:30 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:30.193+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:30.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:30.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:31 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 22 08:37:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:31.201+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 22 08:37:32 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:32.173+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:32.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:32.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:33 np0005592159 ceph-osd[79779]: osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:33.125+0000 7f47f8ed4640 -1 osd.2 64 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 41 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 65 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:34.079+0000 7f47f8ed4640 -1 osd.2 65 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:34.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:34.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:34 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=65) [2] r=0 lpr=66 pi=[59,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:35 np0005592159 ceph-osd[79779]: osd.2 66 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:35.039+0000 7f47f8ed4640 -1 osd.2 66 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: Deploying daemon keepalived.rgw.default.compute-0.hawera on compute-0
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Jan 22 08:37:36 np0005592159 ceph-osd[79779]: osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:35.999+0000 7f47f8ed4640 -1 osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:36.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 22 08:37:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 22 08:37:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 22 08:37:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:36.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:36 np0005592159 ceph-osd[79779]: osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:36.994+0000 7f47f8ed4640 -1 osd.2 67 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.7( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.b( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.1f( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.3( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.13( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 68 pg[9.17( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=68) [2]/[0] r=-1 lpr=68 pi=[59,68)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 22 08:37:37 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 47 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: osd.2 68 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:37.994+0000 7f47f8ed4640 -1 osd.2 68 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 69 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=69) [2] r=0 lpr=69 pi=[59,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:38.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:38.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:38.967+0000 7f47f8ed4640 -1 osd.2 69 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: osd.2 69 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:39 np0005592159 systemd[1]: session-33.scope: Deactivated successfully.
Jan 22 08:37:39 np0005592159 systemd[1]: session-33.scope: Consumed 8.983s CPU time.
Jan 22 08:37:39 np0005592159 systemd-logind[787]: Session 33 logged out. Waiting for processes to exit.
Jan 22 08:37:39 np0005592159 systemd-logind[787]: Removed session 33.
Jan 22 08:37:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 22 08:37:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.15( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'704 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=0/0 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'704 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=61'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=61'686 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.5( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'698 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=0/0 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'698 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[59,70)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=0/0 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'686 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 70 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:39.994+0000 7f47f8ed4640 -1 osd.2 70 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: osd.2 70 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 22 08:37:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.7( v 61'690 (0'0,61'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 22 08:37:40 np0005592159 ceph-mon[77081]: Deploying daemon keepalived.rgw.default.compute-2.xbsrtt on compute-2
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'704 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.b( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.3( v 62'698 (0'0,62'698] local-lis/les=70/71 n=6 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'698 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=61'686 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.13( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 71 pg[9.17( v 62'690 (0'0,62'690] local-lis/les=70/71 n=5 ec=59/49 lis/c=68/59 les/c/f=69/60/0 sis=70) [2] r=0 lpr=70 pi=[59,70)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:40.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.e deep-scrub starts
Jan 22 08:37:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:41.019+0000 7f47f8ed4640 -1 osd.2 71 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 71 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.e deep-scrub ok
Jan 22 08:37:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 luod=0'0 crt=62'690 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=0/0 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=0/0 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'690 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 72 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: osd.2 72 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:41.975+0000 7f47f8ed4640 -1 osd.2 72 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:42.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'690 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.5( v 62'695 (0'0,62'695] local-lis/les=72/73 n=6 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 73 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=70/59 les/c/f=71/60/0 sis=72) [2] r=0 lpr=72 pi=[59,72)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:37:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:42.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:42.940+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.262776428 +0000 UTC m=+3.369691115 container create 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, distribution-scope=public, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 22 08:37:43 np0005592159 systemd[1]: Started libpod-conmon-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope.
Jan 22 08:37:43 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.24804461 +0000 UTC m=+3.354959327 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.328922334 +0000 UTC m=+3.435837051 container init 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.337732573 +0000 UTC m=+3.444647260 container start 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, release=1793, description=keepalived for Ceph, io.openshift.expose-services=, build-date=2023-02-22T09:23:20)
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.341809713 +0000 UTC m=+3.448724430 container attach 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 08:37:43 np0005592159 ecstatic_blackburn[82833]: 0 0
Jan 22 08:37:43 np0005592159 systemd[1]: libpod-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope: Deactivated successfully.
Jan 22 08:37:43 np0005592159 conmon[82833]: conmon 20c76435c0199d02c263 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope/container/memory.events
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.345712558 +0000 UTC m=+3.452627245 container died 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, distribution-scope=public, vcs-type=git, release=1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, vendor=Red Hat, Inc.)
Jan 22 08:37:43 np0005592159 systemd[1]: var-lib-containers-storage-overlay-07f1cae9fbe0e12c5bc10793688baa22e59996ef0636b0428cf808f6c4a4d983-merged.mount: Deactivated successfully.
Jan 22 08:37:43 np0005592159 podman[82737]: 2026-01-22 13:37:43.381490535 +0000 UTC m=+3.488405222 container remove 20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1 (image=quay.io/ceph/keepalived:2.2.4, name=ecstatic_blackburn, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, architecture=x86_64, name=keepalived)
Jan 22 08:37:43 np0005592159 systemd[1]: libpod-conmon-20c76435c0199d02c263e5cfc7af08f863aaf5d4e41b692fefdeef0310e732f1.scope: Deactivated successfully.
Jan 22 08:37:43 np0005592159 systemd[1]: Reloading.
Jan 22 08:37:43 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:43 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:43 np0005592159 systemd[1]: Reloading.
Jan 22 08:37:43 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:37:43 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:37:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:43.919+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:43 np0005592159 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:43 np0005592159 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.xbsrtt for 088fe176-0106-5401-803c-2da38b73b76a...
Jan 22 08:37:44 np0005592159 podman[82977]: 2026-01-22 13:37:44.193479316 +0000 UTC m=+0.039285602 container create 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2)
Jan 22 08:37:44 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a358ca8d9286b2c87ed8309fad35a1ad1ec5603e0132fed2f4d7473a5334162f/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:37:44 np0005592159 podman[82977]: 2026-01-22 13:37:44.249709696 +0000 UTC m=+0.095516002 container init 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, release=1793, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., version=2.2.4)
Jan 22 08:37:44 np0005592159 podman[82977]: 2026-01-22 13:37:44.254955858 +0000 UTC m=+0.100762144 container start 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, architecture=x86_64, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20)
Jan 22 08:37:44 np0005592159 bash[82977]: 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4
Jan 22 08:37:44 np0005592159 podman[82977]: 2026-01-22 13:37:44.175198702 +0000 UTC m=+0.021005008 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 22 08:37:44 np0005592159 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.xbsrtt for 088fe176-0106-5401-803c-2da38b73b76a.
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Starting VRRP child process, pid=4
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: Startup complete
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: (VI_0) Entering BACKUP STATE (init)
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:44 2026: VRRP_Script(check_backend) succeeded
Jan 22 08:37:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:44.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e73 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:44.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:44.889+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:44 np0005592159 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:45.927+0000 7f47f8ed4640 -1 osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:45 np0005592159 ceph-osd[79779]: osd.2 73 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:46 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 52 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Jan 22 08:37:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:46.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:46.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:46.967+0000 7f47f8ed4640 -1 osd.2 74 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:46 np0005592159 ceph-osd[79779]: osd.2 74 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]: dispatch
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 3314933000854323200, adjusting msgr requires
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e75 crush map has features 432629239337189376, adjusting msgr requires
Jan 22 08:37:47 np0005592159 ceph-osd[79779]: osd.2 75 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 22 08:37:47 np0005592159 ceph-osd[79779]: osd.2 75 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 22 08:37:47 np0005592159 ceph-osd[79779]: osd.2 75 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 22 08:37:47 np0005592159 podman[83271]: 2026-01-22 13:37:47.477805243 +0000 UTC m=+0.104746662 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Jan 22 08:37:47 np0005592159 podman[83271]: 2026-01-22 13:37:47.785547568 +0000 UTC m=+0.412489017 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:37:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Jan 22 08:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Entering MASTER STATE
Jan 22 08:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Jan 22 08:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt[82992]: Thu Jan 22 13:37:47 2026: (VI_0) Entering BACKUP STATE
Jan 22 08:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:47.962+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:47 np0005592159 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:48.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.1", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.12", "id": [0, 1]}]': finished
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 57 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:48 np0005592159 podman[83427]: 2026-01-22 13:37:48.790415042 +0000 UTC m=+0.321447448 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:37:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:37:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:48.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:37:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:48.991+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:48 np0005592159 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:49 np0005592159 podman[83427]: 2026-01-22 13:37:49.357799313 +0000 UTC m=+0.888831699 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:37:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:49 np0005592159 podman[83493]: 2026-01-22 13:37:49.759643582 +0000 UTC m=+0.160746615 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, distribution-scope=public, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, vcs-type=git, release=1793, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 08:37:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:50.019+0000 7f47f8ed4640 -1 osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:50 np0005592159 ceph-osd[79779]: osd.2 76 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 22 08:37:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 22 08:37:50 np0005592159 podman[83493]: 2026-01-22 13:37:50.061987611 +0000 UTC m=+0.463090574 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64)
Jan 22 08:37:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:50.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Jan 22 08:37:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:50.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:51.000+0000 7f47f8ed4640 -1 osd.2 77 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: osd.2 77 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 22 08:37:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 78 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:51 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 78 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=78) [2] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:52.002+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 22 08:37:52 np0005592159 podman[83799]: 2026-01-22 13:37:51.945529967 +0000 UTC m=+0.023905707 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:37:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:52.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 22 08:37:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:52.970+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:54.006+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:54 np0005592159 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 22 08:37:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:54.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:54.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:54.997+0000 7f47f8ed4640 -1 osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:54 np0005592159 ceph-osd[79779]: osd.2 78 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:55 np0005592159 systemd-logind[787]: New session 34 of user zuul.
Jan 22 08:37:55 np0005592159 systemd[1]: Started Session 34 of User zuul.
Jan 22 08:37:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 22 08:37:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 22 08:37:55 np0005592159 podman[83799]: 2026-01-22 13:37:55.807864113 +0000 UTC m=+3.886239823 container create bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Jan 22 08:37:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:37:55 np0005592159 python3.9[83969]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 22 08:37:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Jan 22 08:37:56 np0005592159 systemd[1]: Started libpod-conmon-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope.
Jan 22 08:37:56 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:37:56 np0005592159 podman[83799]: 2026-01-22 13:37:56.261297386 +0000 UTC m=+4.339673106 container init bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:37:56 np0005592159 podman[83799]: 2026-01-22 13:37:56.270235497 +0000 UTC m=+4.348611237 container start bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:37:56 np0005592159 brave_allen[83980]: 167 167
Jan 22 08:37:56 np0005592159 systemd[1]: libpod-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope: Deactivated successfully.
Jan 22 08:37:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 22 08:37:56 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 62 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:37:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:56.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:56 np0005592159 podman[83799]: 2026-01-22 13:37:56.598129068 +0000 UTC m=+4.676504778 container attach bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:37:56 np0005592159 podman[83799]: 2026-01-22 13:37:56.598928419 +0000 UTC m=+4.677304129 container died bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:37:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:56.795+0000 7f47f8ed4640 -1 osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:56 np0005592159 ceph-osd[79779]: osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:37:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:56.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:37:57 np0005592159 python3.9[84161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:37:57 np0005592159 systemd[1]: var-lib-containers-storage-overlay-f69366e925332ba90e232e1f47aae5c36a924131bfc9a785f975f41f6d41b78e-merged.mount: Deactivated successfully.
Jan 22 08:37:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Jan 22 08:37:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:57.749+0000 7f47f8ed4640 -1 osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 79 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2] r=0 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.18( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:37:57 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 80 pg[9.8( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=80) [2]/[0] r=-1 lpr=80 pi=[59,80)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:37:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:37:58.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:58 np0005592159 podman[83799]: 2026-01-22 13:37:58.42530357 +0000 UTC m=+6.503679280 container remove bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 22 08:37:58 np0005592159 systemd[1]: libpod-conmon-bda3d790ccd3469ec473f2a0a250b7c8f03d48bd574a001f9cca69696671268c.scope: Deactivated successfully.
Jan 22 08:37:58 np0005592159 python3.9[84318]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:37:58 np0005592159 podman[84326]: 2026-01-22 13:37:58.564856671 +0000 UTC m=+0.027373651 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:37:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:58.744+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:58 np0005592159 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 22 08:37:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:37:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:37:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:37:58.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:37:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 22 08:37:59 np0005592159 podman[84326]: 2026-01-22 13:37:59.160785585 +0000 UTC m=+0.623302575 container create 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 08:37:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:37:59.706+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:59 np0005592159 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:37:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:37:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 22 08:37:59 np0005592159 python3.9[84494]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:38:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:00.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:00 np0005592159 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:00.706+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 22 08:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 22 08:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 22 08:38:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:00.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:01 np0005592159 python3.9[84649]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:01 np0005592159 systemd[1]: Started libpod-conmon-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope.
Jan 22 08:38:01 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:38:01 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:01 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:01 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:01 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 08:38:01 np0005592159 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:01.731+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:02.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:02 np0005592159 python3.9[84806]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:38:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Jan 22 08:38:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:02.763+0000 7f47f8ed4640 -1 osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:02 np0005592159 ceph-osd[79779]: osd.2 80 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 22 08:38:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:02.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 22 08:38:03 np0005592159 ceph-mds[81154]: mds.beacon.cephfs.compute-2.zycvef missed beacon ack from the monitors
Jan 22 08:38:03 np0005592159 python3.9[84957]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:38:03 np0005592159 podman[84326]: 2026-01-22 13:38:03.47274235 +0000 UTC m=+4.935259310 container init 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:03 np0005592159 podman[84326]: 2026-01-22 13:38:03.483433809 +0000 UTC m=+4.945950759 container start 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 08:38:03 np0005592159 network[84976]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:38:03 np0005592159 network[84977]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:38:03 np0005592159 network[84978]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:38:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:03 np0005592159 podman[84326]: 2026-01-22 13:38:03.780039164 +0000 UTC m=+5.242556124 container attach 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Jan 22 08:38:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:03.780+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:03 np0005592159 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 22 08:38:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:04.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Jan 22 08:38:04 np0005592159 charming_albattani[84775]: [
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:    {
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "available": false,
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "ceph_device": false,
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "lsm_data": {},
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "lvs": [],
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "path": "/dev/sr0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "rejected_reasons": [
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "Has a FileSystem",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "Insufficient space (<5GB)"
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        ],
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        "sys_api": {
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "actuators": null,
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "device_nodes": "sr0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "devname": "sr0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "human_readable_size": "482.00 KB",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "id_bus": "ata",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "model": "QEMU DVD-ROM",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "nr_requests": "2",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "parent": "/dev/sr0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "partitions": {},
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "path": "/dev/sr0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "removable": "1",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "rev": "2.5+",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "ro": "0",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "rotational": "1",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "sas_address": "",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "sas_device_handle": "",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "scheduler_mode": "mq-deadline",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "sectors": 0,
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "sectorsize": "2048",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "size": 493568.0,
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "support_discard": "2048",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "type": "disk",
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:            "vendor": "QEMU"
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:        }
Jan 22 08:38:04 np0005592159 charming_albattani[84775]:    }
Jan 22 08:38:04 np0005592159 charming_albattani[84775]: ]
Jan 22 08:38:04 np0005592159 podman[84326]: 2026-01-22 13:38:04.782777678 +0000 UTC m=+6.245294628 container died 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Jan 22 08:38:04 np0005592159 systemd[1]: libpod-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Deactivated successfully.
Jan 22 08:38:04 np0005592159 systemd[1]: libpod-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Consumed 1.299s CPU time.
Jan 22 08:38:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:04.799+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:04 np0005592159 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:04.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:05.819+0000 7f47f8ed4640 -1 osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:05 np0005592159 ceph-osd[79779]: osd.2 81 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.19( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 82 pg[9.9( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=59/59 les/c/f=60/60/0 sis=82) [2]/[0] r=-1 lpr=82 pi=[59,82)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 67 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:06.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:06.797+0000 7f47f8ed4640 -1 osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:06 np0005592159 systemd[1]: var-lib-containers-storage-overlay-4993fb55f0c3a020f4df627b9175d092bd43099da41cba2690a216fade42f332-merged.mount: Deactivated successfully.
Jan 22 08:38:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:06.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Jan 22 08:38:07 np0005592159 ceph-osd[79779]: osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:07.777+0000 7f47f8ed4640 -1 osd.2 82 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:08.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 74 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592159 podman[84326]: 2026-01-22 13:38:08.54996829 +0000 UTC m=+10.012485240 container remove 9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:38:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:08 np0005592159 systemd[1]: libpod-conmon-9a72a1d86ef96968bc86eb90a264cca7bf8608c172a5e7e3c07bf60984d99a96.scope: Deactivated successfully.
Jan 22 08:38:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Jan 22 08:38:08 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:08 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 84 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:08.788+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:08 np0005592159 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:08.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:09.782+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:09 np0005592159 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:09 np0005592159 python3.9[86525]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:38:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:10.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:10 np0005592159 python3.9[86984]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:10.829+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:10 np0005592159 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:10.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:11.796+0000 7f47f8ed4640 -1 osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:11 np0005592159 ceph-osd[79779]: osd.2 84 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:12 np0005592159 python3.9[87578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:38:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Jan 22 08:38:12 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=0/0 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 luod=0'0 crt=61'693 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:12 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=0/0 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=61'693 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:12.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:12.823+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:12 np0005592159 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:12.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:13 np0005592159 python3.9[87737]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: Updating compute-0:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: Updating compute-1:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: Updating compute-2:/etc/ceph/ceph.conf
Jan 22 08:38:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 85 pg[9.18( v 58'684 (0'0,58'684] local-lis/les=84/85 n=4 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=84) [2] r=0 lpr=84 pi=[59,84)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:13.777+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:13 np0005592159 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:14.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:14.750+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:14 np0005592159 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:15 np0005592159 python3.9[87821]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:38:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:15.743+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:15 np0005592159 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: Updating compute-2:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: Updating compute-1:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: Updating compute-0:/var/lib/ceph/088fe176-0106-5401-803c-2da38b73b76a/config/ceph.conf
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 79 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:16.700+0000 7f47f8ed4640 -1 osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:16 np0005592159 ceph-osd[79779]: osd.2 85 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:16 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 luod=0'0 crt=62'705 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:16 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=0/0 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=62'705 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:17 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 86 pg[9.8( v 61'693 (0'0,61'693] local-lis/les=85/86 n=5 ec=59/49 lis/c=80/59 les/c/f=81/60/0 sis=85) [2] r=0 lpr=85 pi=[59,85)/1 crt=61'693 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 22 08:38:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:17.730+0000 7f47f8ed4640 -1 osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:17 np0005592159 ceph-osd[79779]: osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:18.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:18.762+0000 7f47f8ed4640 -1 osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:18 np0005592159 ceph-osd[79779]: osd.2 86 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:18 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:18 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 87 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:19 np0005592159 ceph-osd[79779]: osd.2 87 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:19.794+0000 7f47f8ed4640 -1 osd.2 87 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 22 08:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 22 08:38:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 83 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Jan 22 08:38:20 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 88 pg[9.9( v 58'684 (0'0,58'684] local-lis/les=87/88 n=4 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=87) [2] r=0 lpr=87 pi=[59,87)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:20 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 88 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=82/59 les/c/f=83/60/0 sis=86) [2] r=0 lpr=86 pi=[59,86)/1 crt=62'705 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:20.831+0000 7f47f8ed4640 -1 osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:20 np0005592159 ceph-osd[79779]: osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:20.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:21.875+0000 7f47f8ed4640 -1 osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:21 np0005592159 ceph-osd[79779]: osd.2 88 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Jan 22 08:38:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:22.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 22 08:38:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 22 08:38:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Jan 22 08:38:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:22.897+0000 7f47f8ed4640 -1 osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:22 np0005592159 ceph-osd[79779]: osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:22.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:23.945+0000 7f47f8ed4640 -1 osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:23 np0005592159 ceph-osd[79779]: osd.2 90 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:24 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 93 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 22 08:38:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Jan 22 08:38:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:38:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:24.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:38:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:24.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:24.981+0000 7f47f8ed4640 -1 osd.2 91 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:24 np0005592159 ceph-osd[79779]: osd.2 91 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 22 08:38:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Jan 22 08:38:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:25.993+0000 7f47f8ed4640 -1 osd.2 92 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:25 np0005592159 ceph-osd[79779]: osd.2 92 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.16 deep-scrub starts
Jan 22 08:38:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.16 deep-scrub ok
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:26.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:26.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:27.005+0000 7f47f8ed4640 -1 osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:27 np0005592159 ceph-osd[79779]: osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 22 08:38:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:27.964+0000 7f47f8ed4640 -1 osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:27 np0005592159 ceph-osd[79779]: osd.2 93 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:28.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.979992867s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'705 mlcod 0'0 active pruub 127.301765442s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.979891777s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 127.301765442s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.984266281s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'695 mlcod 0'0 active pruub 127.307731628s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 94 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=94 pruub=9.984044075s) [1] r=-1 lpr=94 pi=[72,94)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 127.307731628s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.nyayzk", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:28.948+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:29.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:29.992+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:29 np0005592159 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:30.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: Reconfiguring mgr.compute-0.nyayzk (monmap changed)...
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: Reconfiguring daemon mgr.compute-0.nyayzk on compute-0
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 98 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:30 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 22 08:38:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:31.003+0000 7f47f8ed4640 -1 osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: osd.2 94 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=72/73 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:31 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 95 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:31.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:32.034+0000 7f47f8ed4640 -1 osd.2 95 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: osd.2 95 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: Reconfiguring osd.0 (monmap changed)...
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: Reconfiguring daemon osd.0 on compute-0
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 22 08:38:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.439438820s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'695 mlcod 0'0 active pruub 133.065887451s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.439373016s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 133.065887451s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.438467979s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'704 mlcod 0'0 active pruub 133.065811157s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:32 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 96 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=96 pruub=11.438288689s) [1] r=-1 lpr=96 pi=[70,96)/1 crt=62'704 mlcod 0'0 unknown NOTIFY pruub 133.065811157s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 22 08:38:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:33.070+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:33 np0005592159 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:33.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:34.090+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:34 np0005592159 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 22 08:38:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 22 08:38:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:34.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:35.112+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:35 np0005592159 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:35.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:36.094+0000 7f47f8ed4640 -1 osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 96 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e96 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=70/71 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=70/71 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 22 08:38:36 np0005592159 ceph-mon[77081]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[72,95)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:36 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 97 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=95) [1]/[2] async=[1] r=0 lpr=95 pi=[72,95)/1 crt=62'705 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:36.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:37.081+0000 7f47f8ed4640 -1 osd.2 97 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 97 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: Reconfiguring osd.1 (monmap changed)...
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: Reconfiguring daemon osd.1 on compute-1
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.930242538s) [1] async=[1] r=-1 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 62'705 active pruub 141.042510986s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.d( v 62'705 (0'0,62'705] local-lis/les=95/97 n=7 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.930095673s) [1] r=-1 lpr=98 pi=[72,98)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 141.042510986s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.923345566s) [1] async=[1] r=-1 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 62'695 active pruub 141.037170410s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=95/97 n=5 ec=59/49 lis/c=95/72 les/c/f=97/73/0 sis=98 pruub=14.923262596s) [1] r=-1 lpr=98 pi=[72,98)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 141.037170410s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] async=[1] r=0 lpr=97 pi=[70,97)/1 crt=62'695 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 98 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=97) [1]/[2] async=[1] r=0 lpr=97 pi=[70,97)/1 crt=62'704 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:38:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:37.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.695192337s) [1] async=[1] r=-1 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 62'704 active pruub 142.123626709s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.694999695s) [1] async=[1] r=-1 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 62'695 active pruub 142.123489380s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.f( v 62'704 (0'0,62'704] local-lis/les=97/98 n=7 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.695041656s) [1] r=-1 lpr=99 pi=[70,99)/1 crt=62'704 mlcod 0'0 unknown NOTIFY pruub 142.123626709s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:37 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 99 pg[9.1f( v 62'695 (0'0,62'695] local-lis/les=97/98 n=5 ec=59/49 lis/c=97/70 les/c/f=98/71/0 sis=99 pruub=15.694853783s) [1] r=-1 lpr=99 pi=[70,99)/1 crt=62'695 mlcod 0'0 unknown NOTIFY pruub 142.123489380s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:38:38 np0005592159 ceph-osd[79779]: osd.2 99 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:38.077+0000 7f47f8ed4640 -1 osd.2 99 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.328139952 +0000 UTC m=+0.060140082 container create 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 08:38:38 np0005592159 systemd[72610]: Created slice User Background Tasks Slice.
Jan 22 08:38:38 np0005592159 systemd[72610]: Starting Cleanup of User's Temporary Files and Directories...
Jan 22 08:38:38 np0005592159 systemd[1]: Started libpod-conmon-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope.
Jan 22 08:38:38 np0005592159 systemd[72610]: Finished Cleanup of User's Temporary Files and Directories.
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.294961088 +0000 UTC m=+0.026961228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 08:38:38 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.414216623 +0000 UTC m=+0.146216763 container init 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.424264604 +0000 UTC m=+0.156264754 container start 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.428132968 +0000 UTC m=+0.160133098 container attach 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 08:38:38 np0005592159 lucid_goodall[88175]: 167 167
Jan 22 08:38:38 np0005592159 systemd[1]: libpod-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope: Deactivated successfully.
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.431756926 +0000 UTC m=+0.163757056 container died 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:38:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:38.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:38 np0005592159 systemd[1]: var-lib-containers-storage-overlay-51cf3b20bd3f08c12ca9ec3d431bee0bcc4aa1b871af0f8879d943dd97fb9da3-merged.mount: Deactivated successfully.
Jan 22 08:38:38 np0005592159 podman[88157]: 2026-01-22 13:38:38.485383902 +0000 UTC m=+0.217384022 container remove 5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 22 08:38:38 np0005592159 systemd[1]: libpod-conmon-5ce535ed9fe251e0fcc09147310e04067c008e0bc0c27975464e35301bd482c2.scope: Deactivated successfully.
Jan 22 08:38:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Jan 22 08:38:39 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.11 deep-scrub starts
Jan 22 08:38:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:39.110+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.11 deep-scrub ok
Jan 22 08:38:39 np0005592159 ceph-mon[77081]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 22 08:38:39 np0005592159 ceph-mon[77081]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 22 08:38:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:39.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:39 np0005592159 podman[88373]: 2026-01-22 13:38:39.555612496 +0000 UTC m=+0.204700360 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 22 08:38:40 np0005592159 podman[88373]: 2026-01-22 13:38:40.024368124 +0000 UTC m=+0.673455968 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 08:38:40 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:40.133+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:40.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:41 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:41.089+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:41 np0005592159 podman[88527]: 2026-01-22 13:38:41.141552863 +0000 UTC m=+0.071719544 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:38:41 np0005592159 podman[88527]: 2026-01-22 13:38:41.151544403 +0000 UTC m=+0.081711064 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:38:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:41.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:41 np0005592159 podman[88589]: 2026-01-22 13:38:41.774450176 +0000 UTC m=+0.078378494 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vcs-type=git, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, name=keepalived)
Jan 22 08:38:41 np0005592159 podman[88589]: 2026-01-22 13:38:41.788697461 +0000 UTC m=+0.092625779 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:38:42 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:42.068+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:43 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:43.030+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:43.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:43 np0005592159 ceph-osd[79779]: osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:43.985+0000 7f47f8ed4640 -1 osd.2 100 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 22 08:38:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Jan 22 08:38:45 np0005592159 ceph-osd[79779]: osd.2 101 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.a deep-scrub starts
Jan 22 08:38:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:45.001+0000 7f47f8ed4640 -1 osd.2 101 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.a deep-scrub ok
Jan 22 08:38:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 22 08:38:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Jan 22 08:38:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:45.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:46 np0005592159 ceph-osd[79779]: osd.2 102 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:46.012+0000 7f47f8ed4640 -1 osd.2 102 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 22 08:38:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Jan 22 08:38:47 np0005592159 ceph-osd[79779]: osd.2 103 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:47.030+0000 7f47f8ed4640 -1 osd.2 103 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 22 08:38:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 22 08:38:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Jan 22 08:38:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 22 08:38:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:47.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 22 08:38:48 np0005592159 ceph-osd[79779]: osd.2 104 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:48.068+0000 7f47f8ed4640 -1 osd.2 104 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 22 08:38:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:48.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Jan 22 08:38:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:38:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 22 08:38:49 np0005592159 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:49.021+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 22 08:38:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:49.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:50 np0005592159 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:50.018+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:50.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:51 np0005592159 ceph-osd[79779]: osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:51.034+0000 7f47f8ed4640 -1 osd.2 105 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Jan 22 08:38:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:51.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:52.016+0000 7f47f8ed4640 -1 osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 22 08:38:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:52.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:52.985+0000 7f47f8ed4640 -1 osd.2 106 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:53 np0005592159 ceph-osd[79779]: osd.2 107 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 22 08:38:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:53.972+0000 7f47f8ed4640 -1 osd.2 107 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 22 08:38:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:54.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 22 08:38:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Jan 22 08:38:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:54.997+0000 7f47f8ed4640 -1 osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:55 np0005592159 ceph-osd[79779]: osd.2 108 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 22 08:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 22 08:38:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 22 08:38:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Jan 22 08:38:56 np0005592159 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:56.012+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:38:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:56.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:38:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:38:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 22 08:38:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 22 08:38:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:56 np0005592159 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:56.968+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:57.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:57 np0005592159 ceph-osd[79779]: osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:57.948+0000 7f47f8ed4640 -1 osd.2 109 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Jan 22 08:38:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 22 08:38:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:38:58.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:58 np0005592159 ceph-osd[79779]: osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:58.966+0000 7f47f8ed4640 -1 osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:38:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 22 08:38:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:38:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:38:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:38:59.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:38:59 np0005592159 ceph-osd[79779]: osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:38:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:38:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:38:59.944+0000 7f47f8ed4640 -1 osd.2 110 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Jan 22 08:39:00 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 111 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=10.250331879s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=62'690 mlcod 0'0 active pruub 159.308609009s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:00 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 111 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=111 pruub=10.250194550s) [1] r=-1 lpr=111 pi=[72,111)/1 crt=62'690 mlcod 0'0 unknown NOTIFY pruub 159.308609009s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 22 08:39:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:00.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:00 np0005592159 ceph-osd[79779]: osd.2 111 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:00.951+0000 7f47f8ed4640 -1 osd.2 111 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Jan 22 08:39:01 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 112 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:01 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 112 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=72/73 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 22 08:39:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:01.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:01 np0005592159 ceph-osd[79779]: osd.2 112 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:01.922+0000 7f47f8ed4640 -1 osd.2 112 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 22 08:39:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:02.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 113 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=113) [2] r=0 lpr=113 pi=[77,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 113 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=72/72 les/c/f=73/73/0 sis=112) [1]/[2] async=[1] r=0 lpr=112 pi=[72,112)/1 crt=62'690 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 114 pg[9.16( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=77/77 les/c/f=78/78/0 sis=114) [2]/[1] r=-1 lpr=114 pi=[77,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:02.886+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:03.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:03 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:03.922+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:04 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:04.919+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:39:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:05.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:39:05 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:05.913+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:06.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:06 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:06.883+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 22 08:39:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 22 08:39:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:07.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:07 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:07.889+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:08.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:08.848+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:08 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:09.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:09.870+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:09 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:10.827+0000 7f47f8ed4640 -1 osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:10 np0005592159 ceph-osd[79779]: osd.2 114 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 22 08:39:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2[77077]: 2026-01-22T13:39:11.216+0000 7f661ae92640 -1 mon.compute-2@1(peon).paxos(paxos updating c 1..711) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.505236149 seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).paxos(paxos updating c 1..711) lease_expire from mon.0 v2:192.168.122.100:3300/0 is 2.505236149 seconds in the past; mons are probably laggy (or possibly clocks are too skewed)
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Jan 22 08:39:11 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115 pruub=15.244450569s) [1] async=[1] r=-1 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 62'690 active pruub 175.391098022s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:11 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 115 pg[9.15( v 62'690 (0'0,62'690] local-lis/les=112/113 n=5 ec=59/49 lis/c=112/72 les/c/f=113/73/0 sis=115 pruub=15.244197845s) [1] r=-1 lpr=115 pi=[72,115)/1 crt=62'690 mlcod 0'0 unknown NOTIFY pruub 175.391098022s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:11.789+0000 7f47f8ed4640 -1 osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:11 np0005592159 ceph-osd[79779]: osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:11.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:12.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:12.780+0000 7f47f8ed4640 -1 osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:12 np0005592159 ceph-osd[79779]: osd.2 115 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 luod=0'0 crt=58'684 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 116 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=0/0 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:13.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:13.797+0000 7f47f8ed4640 -1 osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Jan 22 08:39:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Jan 22 08:39:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:14.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:14 np0005592159 python3.9[88968]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:39:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:14.832+0000 7f47f8ed4640 -1 osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:14 np0005592159 ceph-osd[79779]: osd.2 116 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Jan 22 08:39:15 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 117 pg[9.16( v 58'684 (0'0,58'684] local-lis/les=116/117 n=4 ec=59/49 lis/c=114/77 les/c/f=115/78/0 sis=116) [2] r=0 lpr=116 pi=[77,116)/1 crt=58'684 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:15.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:15.799+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:15 np0005592159 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:16.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:16 np0005592159 python3.9[89255]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 22 08:39:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:16.765+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:16 np0005592159 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592159 python3.9[89408]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 22 08:39:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:17.748+0000 7f47f8ed4640 -1 osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:17 np0005592159 ceph-osd[79779]: osd.2 117 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 22 08:39:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 22 08:39:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:39:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:17.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:39:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Jan 22 08:39:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 22 08:39:17 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 144 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:18 np0005592159 python3.9[89560]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:18.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:18.770+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:18 np0005592159 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 22 08:39:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 22 08:39:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 22 08:39:19 np0005592159 python3.9[89713]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 22 08:39:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:19.727+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:19 np0005592159 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 22 08:39:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 22 08:39:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:19.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:20.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:20.772+0000 7f47f8ed4640 -1 osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:20 np0005592159 ceph-osd[79779]: osd.2 118 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Jan 22 08:39:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 22 08:39:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:21 np0005592159 python3.9[89866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:21.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:21.810+0000 7f47f8ed4640 -1 osd.2 119 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:21 np0005592159 ceph-osd[79779]: osd.2 119 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 120 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=120 pruub=10.775048256s) [0] r=-1 lpr=120 pi=[86,120)/1 crt=62'705 mlcod 0'0 active pruub 181.590789795s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 120 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=120 pruub=10.773756981s) [0] r=-1 lpr=120 pi=[86,120)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 181.590789795s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:22 np0005592159 python3.9[90018]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 22 08:39:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 22 08:39:22 np0005592159 python3.9[90097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:39:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:22.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 121 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 121 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=86/88 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:22.835+0000 7f47f8ed4640 -1 osd.2 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: osd.2 121 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.b deep-scrub starts
Jan 22 08:39:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.b deep-scrub ok
Jan 22 08:39:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 22 08:39:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 154 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Jan 22 08:39:23 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 122 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=86/86 les/c/f=88/88/0 sis=121) [0]/[2] async=[0] r=0 lpr=121 pi=[86,121)/1 crt=62'705 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:23.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:23.874+0000 7f47f8ed4640 -1 osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:23 np0005592159 ceph-osd[79779]: osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 22 08:39:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 22 08:39:23 np0005592159 python3.9[90249]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 22 08:39:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 22 08:39:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:24.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:24.917+0000 7f47f8ed4640 -1 osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:24 np0005592159 ceph-osd[79779]: osd.2 122 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 22 08:39:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 22 08:39:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Jan 22 08:39:25 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123 pruub=14.623571396s) [0] async=[0] r=-1 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 62'705 active pruub 188.486572266s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:25 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 123 pg[9.19( v 62'705 (0'0,62'705] local-lis/les=121/122 n=6 ec=59/49 lis/c=121/86 les/c/f=122/88/0 sis=123 pruub=14.623458862s) [0] r=-1 lpr=123 pi=[86,123)/1 crt=62'705 mlcod 0'0 unknown NOTIFY pruub 188.486572266s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:25 np0005592159 python3.9[90404]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 22 08:39:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:25.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:25.923+0000 7f47f8ed4640 -1 osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:25 np0005592159 ceph-osd[79779]: osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:26 np0005592159 python3.9[90558]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 22 08:39:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:26.940+0000 7f47f8ed4640 -1 osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:26 np0005592159 ceph-osd[79779]: osd.2 123 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 124 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=124 pruub=12.977127075s) [0] r=-1 lpr=124 pi=[70,124)/1 crt=61'686 mlcod 0'0 active pruub 189.279891968s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 124 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=124 pruub=12.977048874s) [0] r=-1 lpr=124 pi=[70,124)/1 crt=61'686 mlcod 0'0 unknown NOTIFY pruub 189.279891968s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:27 np0005592159 python3.9[90761]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:39:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 125 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 125 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=70/71 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:27.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:27.945+0000 7f47f8ed4640 -1 osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Jan 22 08:39:28 np0005592159 python3.9[90913]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 22 08:39:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:28.952+0000 7f47f8ed4640 -1 osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:28 np0005592159 ceph-osd[79779]: osd.2 125 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 22 08:39:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Jan 22 08:39:29 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 126 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=70/70 les/c/f=71/71/0 sis=125) [0]/[2] async=[0] r=0 lpr=125 pi=[70,125)/1 crt=61'686 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:29 np0005592159 python3.9[91066]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 08:39:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:29.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 08:39:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:29.940+0000 7f47f8ed4640 -1 osd.2 126 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:29 np0005592159 ceph-osd[79779]: osd.2 126 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Jan 22 08:39:30 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127 pruub=14.943853378s) [0] async=[0] r=-1 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 61'686 active pruub 193.897628784s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:30 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 127 pg[9.1b( v 61'686 (0'0,61'686] local-lis/les=125/126 n=3 ec=59/49 lis/c=125/70 les/c/f=126/71/0 sis=127 pruub=14.943747520s) [0] r=-1 lpr=127 pi=[70,127)/1 crt=61'686 mlcod 0'0 unknown NOTIFY pruub 193.897628784s@ mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 08:39:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 08:39:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:30.943+0000 7f47f8ed4640 -1 osd.2 127 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:30 np0005592159 ceph-osd[79779]: osd.2 127 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Jan 22 08:39:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:31.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:31 np0005592159 python3.9[91220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:31.963+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:31 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 22 08:39:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 22 08:39:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:32.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:32 np0005592159 python3.9[91373]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:32 np0005592159 python3.9[91451]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:33.009+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:33 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:39:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:33.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:39:33 np0005592159 python3.9[91603]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:39:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 22 08:39:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:34.036+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:34 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 22 08:39:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:34 np0005592159 python3.9[91681]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:39:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:35.050+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:35 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:35 np0005592159 python3.9[91834]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:39:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:39:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:35.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:39:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:36.004+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:36 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.382615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176382689, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7355, "num_deletes": 256, "total_data_size": 14124223, "memory_usage": 14346272, "flush_reason": "Manual Compaction"}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176440092, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 8798958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 257, "largest_seqno": 7360, "table_properties": {"data_size": 8768094, "index_size": 20189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 92051, "raw_average_key_size": 24, "raw_value_size": 8693720, "raw_average_value_size": 2268, "num_data_blocks": 884, "num_entries": 3833, "num_filter_entries": 3833, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088929, "oldest_key_time": 1769088929, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 57555 microseconds, and 15968 cpu microseconds.
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.440171) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 8798958 bytes OK
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.440195) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444289) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444326) EVENT_LOG_v1 {"time_micros": 1769089176444307, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.444348) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14083390, prev total WAL file size 14083390, number of live WAL files 2.
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.446645) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(8592KB) 8(1648B)]
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176446763, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 8800606, "oldest_snapshot_seqno": -1}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 3580 keys, 8795175 bytes, temperature: kUnknown
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176503978, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 8795175, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8765003, "index_size": 20142, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8965, "raw_key_size": 87835, "raw_average_key_size": 24, "raw_value_size": 8693778, "raw_average_value_size": 2428, "num_data_blocks": 884, "num_entries": 3580, "num_filter_entries": 3580, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089176, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.504246) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 8795175 bytes
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.505669) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.6 rd, 153.5 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(8.4, 0.0 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3838, records dropped: 258 output_compression: NoCompression
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.505698) EVENT_LOG_v1 {"time_micros": 1769089176505687, "job": 4, "event": "compaction_finished", "compaction_time_micros": 57300, "compaction_time_cpu_micros": 16492, "output_level": 6, "num_output_files": 1, "total_output_size": 8795175, "num_input_records": 3838, "num_output_records": 3580, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176507089, "job": 4, "event": "table_file_deletion", "file_number": 14}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089176507126, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 22 08:39:36 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:39:36.446505) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:39:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:36.995+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:36 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:37 np0005592159 python3.9[91987]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:39:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:37.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:39:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:38.028+0000 7f47f8ed4640 -1 osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:38 np0005592159 ceph-osd[79779]: osd.2 128 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 22 08:39:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 22 08:39:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:38.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Jan 22 08:39:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 22 08:39:38 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:38 np0005592159 python3.9[92140]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 22 08:39:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:39.002+0000 7f47f8ed4640 -1 osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:39 np0005592159 ceph-osd[79779]: osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:39 np0005592159 python3.9[92290]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:39:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 22 08:39:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:39.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:39.990+0000 7f47f8ed4640 -1 osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:39 np0005592159 ceph-osd[79779]: osd.2 129 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:40.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 22 08:39:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Jan 22 08:39:40 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 130 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=130) [2] r=0 lpr=130 pi=[98,130)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:40.980+0000 7f47f8ed4640 -1 osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:40 np0005592159 ceph-osd[79779]: osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:41 np0005592159 python3.9[92443]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:41 np0005592159 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592159 systemd[1]: tuned.service: Deactivated successfully.
Jan 22 08:39:41 np0005592159 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 22 08:39:41 np0005592159 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 22 08:39:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 22 08:39:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:41.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:41 np0005592159 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 22 08:39:42 np0005592159 ceph-osd[79779]: osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:42.020+0000 7f47f8ed4640 -1 osd.2 130 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 22 08:39:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Jan 22 08:39:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 131 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=-1 lpr=131 pi=[98,131)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:42 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 131 pg[9.1d( empty local-lis/les=0/0 n=0 ec=59/49 lis/c=98/98 les/c/f=99/99/0 sis=131) [2]/[1] r=-1 lpr=131 pi=[98,131)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 22 08:39:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:42.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:42 np0005592159 python3.9[92605]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 22 08:39:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:43.055+0000 7f47f8ed4640 -1 osd.2 131 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:43 np0005592159 ceph-osd[79779]: osd.2 131 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 22 08:39:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 22 08:39:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Jan 22 08:39:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:43.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:44.062+0000 7f47f8ed4640 -1 osd.2 132 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:44 np0005592159 ceph-osd[79779]: osd.2 132 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 22 08:39:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Jan 22 08:39:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 luod=0'0 crt=62'695 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Jan 22 08:39:44 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 133 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=0/0 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 22 08:39:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:44.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:45.021+0000 7f47f8ed4640 -1 osd.2 133 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: osd.2 133 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 22 08:39:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: osd.2 pg_epoch: 134 pg[9.1d( v 62'695 (0'0,62'695] local-lis/les=133/134 n=5 ec=59/49 lis/c=131/98 les/c/f=132/99/0 sis=133) [2] r=0 lpr=133 pi=[98,133)/1 crt=62'695 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 22 08:39:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:45.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:45.980+0000 7f47f8ed4640 -1 osd.2 134 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: osd.2 134 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 22 08:39:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 22 08:39:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 22 08:39:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:46.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Jan 22 08:39:46 np0005592159 python3.9[92759]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:47 np0005592159 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 22 08:39:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:47.001+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 22 08:39:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:47.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:48.026+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:48 np0005592159 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:48.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:49.032+0000 7f47f8ed4640 -1 osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:49 np0005592159 ceph-osd[79779]: osd.2 135 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 22 08:39:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 22 08:39:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 22 08:39:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Jan 22 08:39:49 np0005592159 python3.9[92964]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:39:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:49.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:50.050+0000 7f47f8ed4640 -1 osd.2 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:50 np0005592159 ceph-osd[79779]: osd.2 136 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 22 08:39:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 22 08:39:50 np0005592159 systemd[1]: session-34.scope: Deactivated successfully.
Jan 22 08:39:50 np0005592159 systemd[1]: session-34.scope: Consumed 1min 11.713s CPU time.
Jan 22 08:39:50 np0005592159 systemd-logind[787]: Session 34 logged out. Waiting for processes to exit.
Jan 22 08:39:50 np0005592159 systemd-logind[787]: Removed session 34.
Jan 22 08:39:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:50.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Jan 22 08:39:50 np0005592159 podman[93162]: 2026-01-22 13:39:50.546956949 +0000 UTC m=+0.755024915 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:39:50 np0005592159 podman[93162]: 2026-01-22 13:39:50.685629727 +0000 UTC m=+0.893697633 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:39:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:51.047+0000 7f47f8ed4640 -1 osd.2 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:51 np0005592159 ceph-osd[79779]: osd.2 137 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:51 np0005592159 podman[93319]: 2026-01-22 13:39:51.352014722 +0000 UTC m=+0.058827194 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:39:51 np0005592159 podman[93319]: 2026-01-22 13:39:51.363684213 +0000 UTC m=+0.070496665 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Jan 22 08:39:51 np0005592159 podman[93386]: 2026-01-22 13:39:51.582608886 +0000 UTC m=+0.065138132 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.component=keepalived-container, release=1793, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, name=keepalived, architecture=x86_64, distribution-scope=public, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 22 08:39:51 np0005592159 podman[93386]: 2026-01-22 13:39:51.596690023 +0000 UTC m=+0.079219249 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, io.buildah.version=1.28.2, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, com.redhat.component=keepalived-container, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 22 08:39:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 08:39:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:51.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 08:39:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:52.004+0000 7f47f8ed4640 -1 osd.2 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:52 np0005592159 ceph-osd[79779]: osd.2 138 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:52.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:39:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Jan 22 08:39:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:52.955+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:53.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:53.935+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:54.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:54.958+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 22 08:39:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 22 08:39:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:55.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:55.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:55 np0005592159 systemd-logind[787]: New session 35 of user zuul.
Jan 22 08:39:56 np0005592159 systemd[1]: Started Session 35 of User zuul.
Jan 22 08:39:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:39:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:56.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:56.951+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:57 np0005592159 python3.9[93707]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:39:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:57.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:57.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:58 np0005592159 python3.9[93863]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 22 08:39:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:39:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:39:58.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:39:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:39:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:58.885+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:59 np0005592159 python3.9[94017]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:39:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:39:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:39:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:39:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:39:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:39:59.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:39:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:39:59.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:39:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:00.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:00 np0005592159 python3.9[94152]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:40:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 188 sec, osd.2 has slow ops
Jan 22 08:40:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:00.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 22 08:40:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 22 08:40:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:01.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:01.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:02.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:02 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:02 np0005592159 python3.9[94306]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:02.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:03.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:03.934+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:04.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:04.897+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592159 python3.9[94460]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:40:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:05.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 22 08:40:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:05.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 22 08:40:06 np0005592159 python3.9[94613]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:06.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:06.904+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 22 08:40:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 22 08:40:07 np0005592159 python3.9[94766]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 22 08:40:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:07.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:07.911+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:08 np0005592159 python3.9[94966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:08.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 22 08:40:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:08.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 22 08:40:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:09 np0005592159 python3.9[95125]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:09.874+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 22 08:40:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:09.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 22 08:40:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:10.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:10.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:11.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:11.941+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 22 08:40:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:11 np0005592159 python3.9[95279]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 22 08:40:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:12.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:12.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:13 np0005592159 python3.9[95567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 22 08:40:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:13.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:13.901+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:14 np0005592159 python3.9[95718]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:14.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:14.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:15 np0005592159 python3.9[95872]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:15.870+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:15.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:16.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:16.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 22 08:40:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 22 08:40:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592159 python3.9[96026]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:17.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:17.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:18.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:18.922+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:19 np0005592159 python3.9[96182]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:19.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:19.929+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:20.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:20 np0005592159 python3.9[96337]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 22 08:40:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:20.897+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:21.922+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:22 np0005592159 systemd[1]: session-35.scope: Deactivated successfully.
Jan 22 08:40:22 np0005592159 systemd[1]: session-35.scope: Consumed 17.436s CPU time.
Jan 22 08:40:22 np0005592159 systemd-logind[787]: Session 35 logged out. Waiting for processes to exit.
Jan 22 08:40:22 np0005592159 systemd-logind[787]: Removed session 35.
Jan 22 08:40:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:22.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:22.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 22 08:40:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 22 08:40:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:23.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 22 08:40:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 22 08:40:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:40:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:24.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:40:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:24.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:25.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:26.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:26.884+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592159 systemd-logind[787]: New session 36 of user zuul.
Jan 22 08:40:27 np0005592159 systemd[1]: Started Session 36 of User zuul.
Jan 22 08:40:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:27.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:27.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:28 np0005592159 python3.9[96569]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:28.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:28 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:28.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:29 np0005592159 python3.9[96726]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:29.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:29.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:30 np0005592159 python3.9[96920]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:30.943+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:31 np0005592159 systemd[1]: session-36.scope: Deactivated successfully.
Jan 22 08:40:31 np0005592159 systemd[1]: session-36.scope: Consumed 2.145s CPU time.
Jan 22 08:40:31 np0005592159 systemd-logind[787]: Session 36 logged out. Waiting for processes to exit.
Jan 22 08:40:31 np0005592159 systemd-logind[787]: Removed session 36.
Jan 22 08:40:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:31.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:31.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:32.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:32.941+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:33.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:33.960+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:34.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:34.979+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:35.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:35.967+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:36.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:36.994+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592159 systemd-logind[787]: New session 37 of user zuul.
Jan 22 08:40:37 np0005592159 systemd[1]: Started Session 37 of User zuul.
Jan 22 08:40:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:37.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:37.951+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:37 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:38 np0005592159 python3.9[97103]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:38.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:38.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:39 np0005592159 python3.9[97258]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:39.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:39.961+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:40.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:40 np0005592159 python3.9[97415]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:40.999+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:41 np0005592159 python3.9[97499]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:41.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:42.038+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:42.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:43.017+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:43 np0005592159 python3.9[97653]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:40:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:43.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:44.029+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:44.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592159 python3.9[97849]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:45.072+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:45 np0005592159 python3.9[98001]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:45.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:46.095+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:46.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:47 np0005592159 python3.9[98167]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:47.141+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:47 np0005592159 python3.9[98245]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:40:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:47.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:48.151+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:48 np0005592159 python3.9[98448]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:40:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:48.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:48 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:49 np0005592159 python3.9[98526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:49.146+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:49 np0005592159 python3.9[98678]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:49.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:50.145+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:50 np0005592159 python3.9[98830]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:51 np0005592159 python3.9[98983]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:51.168+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:51 np0005592159 python3.9[99135]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:40:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:52.164+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:52.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:53 np0005592159 python3.9[99288]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:40:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:53.167+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:53.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:54.195+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:54.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:55.234+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:55 np0005592159 python3.9[99442]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:40:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:55.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:56.267+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:56 np0005592159 python3.9[99596]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:40:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:56.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:57.228+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:57 np0005592159 python3.9[99749]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:40:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:57.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:40:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:58.184+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:58 np0005592159 python3.9[99901]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:40:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:40:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:40:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:40:58.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:40:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:40:59.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:40:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:40:59 np0005592159 python3.9[100055]: ansible-service_facts Invoked
Jan 22 08:40:59 np0005592159 network[100072]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:40:59 np0005592159 network[100073]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:40:59 np0005592159 network[100074]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:40:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:40:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:40:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:40:59.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:00.216+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:00.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:00 np0005592159 podman[100280]: 2026-01-22 13:41:00.723071625 +0000 UTC m=+0.064043926 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:41:00 np0005592159 podman[100280]: 2026-01-22 13:41:00.829651574 +0000 UTC m=+0.170623875 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Jan 22 08:41:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:01.198+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:01 np0005592159 podman[100462]: 2026-01-22 13:41:01.358274778 +0000 UTC m=+0.045658041 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:41:01 np0005592159 podman[100462]: 2026-01-22 13:41:01.393716662 +0000 UTC m=+0.081099895 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:41:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:01 np0005592159 podman[100542]: 2026-01-22 13:41:01.576661148 +0000 UTC m=+0.051435616 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, io.buildah.version=1.28.2, vcs-type=git, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:41:01 np0005592159 podman[100542]: 2026-01-22 13:41:01.588692982 +0000 UTC m=+0.063467470 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.tags=Ceph keepalived, release=1793, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, build-date=2023-02-22T09:23:20)
Jan 22 08:41:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:01.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:02.238+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:02.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:03.229+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:41:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:03.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:04.190+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592159 python3.9[101066]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:41:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:05.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:05.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:06.219+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:06.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:07.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:07.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:08 np0005592159 python3.9[101220]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 22 08:41:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:08.157+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:08.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:09.206+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592159 python3.9[101423]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:41:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:09.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:09 np0005592159 python3.9[101501]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:10.172+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:10.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:11 np0005592159 python3.9[101704]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:11.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:11 np0005592159 python3.9[101782]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:12.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:12.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:13.112+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:13 np0005592159 python3.9[101935]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:14.146+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:14.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:15.132+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:15 np0005592159 python3.9[102088]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:41:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:15.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:16.163+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592159 python3.9[102172]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:16.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:17.175+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:17 np0005592159 systemd[1]: session-37.scope: Deactivated successfully.
Jan 22 08:41:17 np0005592159 systemd[1]: session-37.scope: Consumed 23.312s CPU time.
Jan 22 08:41:17 np0005592159 systemd-logind[787]: Session 37 logged out. Waiting for processes to exit.
Jan 22 08:41:17 np0005592159 systemd-logind[787]: Removed session 37.
Jan 22 08:41:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:17.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:18.168+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:18.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:19.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:19.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:20.203+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:20.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:21.244+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:21.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:22.209+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:22.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:22 np0005592159 systemd-logind[787]: New session 38 of user zuul.
Jan 22 08:41:22 np0005592159 systemd[1]: Started Session 38 of User zuul.
Jan 22 08:41:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:23.237+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:23 np0005592159 python3.9[102358]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:23.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:24.237+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592159 python3.9[102511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:24.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:24 np0005592159 python3.9[102589]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:25.285+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:25 np0005592159 systemd[1]: session-38.scope: Deactivated successfully.
Jan 22 08:41:25 np0005592159 systemd[1]: session-38.scope: Consumed 1.285s CPU time.
Jan 22 08:41:25 np0005592159 systemd-logind[787]: Session 38 logged out. Waiting for processes to exit.
Jan 22 08:41:25 np0005592159 systemd-logind[787]: Removed session 38.
Jan 22 08:41:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:25.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:26.305+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:26.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:27.298+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:27.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:28.311+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:28.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:29.327+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:29.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:30.326+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:30.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:31 np0005592159 systemd-logind[787]: New session 39 of user zuul.
Jan 22 08:41:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:31.331+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:31 np0005592159 systemd[1]: Started Session 39 of User zuul.
Jan 22 08:41:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:31.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:32.379+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:32 np0005592159 python3.9[102821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:41:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:33.346+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:33 np0005592159 python3.9[102978]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:34.353+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:34 np0005592159 python3.9[103154]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:35 np0005592159 python3.9[103232]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.73yt8_6b recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:35.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:36.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:36.257+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:36.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:36 np0005592159 python3.9[103385]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:37.259+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592159 python3.9[103463]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.6j1ftkho recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:37 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:38.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:38 np0005592159 python3.9[103615]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:38.245+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:38 np0005592159 python3.9[103768]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:39.204+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:39 np0005592159 python3.9[103846]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:40 np0005592159 python3.9[103998]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:40.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:40.179+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:40.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:40 np0005592159 python3.9[104077]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:41:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:41.211+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:41 np0005592159 python3.9[104229]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:42.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:42.188+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:42 np0005592159 python3.9[104381]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:42.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:42 np0005592159 python3.9[104460]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:43.236+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:43 np0005592159 python3.9[104612]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:44.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:44 np0005592159 python3.9[104690]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:44.235+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:44.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:45.189+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:45 np0005592159 python3.9[104843]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:45 np0005592159 systemd[1]: Reloading.
Jan 22 08:41:45 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:45 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:41:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:46.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:41:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:46.210+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:46.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:46 np0005592159 python3.9[105033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:47.231+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:47 np0005592159 python3.9[105111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:48.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:48 np0005592159 python3.9[105263]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:48.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:48.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:48 np0005592159 python3.9[105392]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:49.226+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:49 np0005592159 python3.9[105544]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:41:49 np0005592159 systemd[1]: Reloading.
Jan 22 08:41:49 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:49 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:41:49 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:41:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:50.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:50 np0005592159 systemd[1]: Starting Create netns directory...
Jan 22 08:41:50 np0005592159 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:41:50 np0005592159 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:41:50 np0005592159 systemd[1]: Finished Create netns directory.
Jan 22 08:41:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:50.219+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:51 np0005592159 python3.9[105737]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:41:51 np0005592159 network[105754]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:41:51 np0005592159 network[105755]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:41:51 np0005592159 network[105756]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:41:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:51.202+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:52.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:52.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:41:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:52.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:41:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:53.208+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:54.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:54.183+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:54.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:55.205+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:56.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:56.193+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:41:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:56.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:57.154+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592159 python3.9[106021]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:41:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:41:58.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:58.162+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:41:58 np0005592159 python3.9[106099]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:41:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:41:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:41:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:41:58.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:41:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:41:59.135+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:41:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:41:59 np0005592159 python3.9[106252]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:00.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:00.138+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592159 python3.9[106405]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:42:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:00.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:42:00 np0005592159 python3.9[106483]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:01.161+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:02.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:02.150+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:02 np0005592159 python3.9[106635]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 22 08:42:02 np0005592159 systemd[1]: Starting Time & Date Service...
Jan 22 08:42:02 np0005592159 systemd[1]: Started Time & Date Service.
Jan 22 08:42:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:02.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:03.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:03 np0005592159 python3.9[106792]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:04.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:04.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:04.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:04 np0005592159 python3.9[106945]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:05.221+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:05 np0005592159 python3.9[107023]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:06 np0005592159 python3.9[107175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:06.175+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:06 np0005592159 python3.9[107254]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yknvatdw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:06.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:07.133+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:07 np0005592159 python3.9[107406]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:08 np0005592159 python3.9[107484]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:08.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:08.092+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:42:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:08.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:42:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:09 np0005592159 python3.9[107687]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:09.092+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:10.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:10.072+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592159 python3[107840]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:42:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:42:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:10.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:42:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:10 np0005592159 podman[108114]: 2026-01-22 13:42:10.914693678 +0000 UTC m=+0.056824532 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 08:42:11 np0005592159 podman[108114]: 2026-01-22 13:42:11.019493483 +0000 UTC m=+0.161624337 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 08:42:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:11.084+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:11 np0005592159 python3.9[108185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:11 np0005592159 python3.9[108367]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:11 np0005592159 podman[108400]: 2026-01-22 13:42:11.733367799 +0000 UTC m=+0.061851576 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:42:11 np0005592159 podman[108400]: 2026-01-22 13:42:11.742449412 +0000 UTC m=+0.070933189 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:42:11 np0005592159 podman[108493]: 2026-01-22 13:42:11.988013993 +0000 UTC m=+0.059671208 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20)
Jan 22 08:42:12 np0005592159 podman[108493]: 2026-01-22 13:42:12.001689569 +0000 UTC m=+0.073346774 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:42:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:12.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:12.073+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:12 np0005592159 python3.9[108731]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:13.044+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:13 np0005592159 python3.9[108913]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089332.0290365-903-133623386870668/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:14.002+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:14.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:42:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:42:14 np0005592159 python3.9[109066]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:15.044+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:15 np0005592159 python3.9[109144]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:16.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:16.088+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592159 python3.9[109296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.388688) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336388737, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 2593, "num_deletes": 251, "total_data_size": 5257530, "memory_usage": 5338304, "flush_reason": "Manual Compaction"}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336419632, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 3384523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7365, "largest_seqno": 9953, "table_properties": {"data_size": 3374668, "index_size": 5773, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3013, "raw_key_size": 25170, "raw_average_key_size": 21, "raw_value_size": 3352581, "raw_average_value_size": 2826, "num_data_blocks": 255, "num_entries": 1186, "num_filter_entries": 1186, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089177, "oldest_key_time": 1769089177, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 30999 microseconds, and 7093 cpu microseconds.
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.419690) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 3384523 bytes OK
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.419715) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421550) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421569) EVENT_LOG_v1 {"time_micros": 1769089336421564, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.421591) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 5245572, prev total WAL file size 5245572, number of live WAL files 2.
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.422938) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(3305KB)], [15(8589KB)]
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336423028, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 12179698, "oldest_snapshot_seqno": -1}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 4243 keys, 10523668 bytes, temperature: kUnknown
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336501515, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 10523668, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10489352, "index_size": 22622, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10629, "raw_key_size": 103721, "raw_average_key_size": 24, "raw_value_size": 10406610, "raw_average_value_size": 2452, "num_data_blocks": 980, "num_entries": 4243, "num_filter_entries": 4243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.501812) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 10523668 bytes
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.503353) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.0 rd, 133.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 4766, records dropped: 523 output_compression: NoCompression
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.503371) EVENT_LOG_v1 {"time_micros": 1769089336503362, "job": 6, "event": "compaction_finished", "compaction_time_micros": 78575, "compaction_time_cpu_micros": 24274, "output_level": 6, "num_output_files": 1, "total_output_size": 10523668, "num_input_records": 4766, "num_output_records": 4243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336504150, "job": 6, "event": "table_file_deletion", "file_number": 17}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089336505814, "job": 6, "event": "table_file_deletion", "file_number": 15}
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.422810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505899) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:16.505900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:16.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:16 np0005592159 python3.9[109375]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:17.122+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:17 np0005592159 python3.9[109527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:18.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:18.079+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592159 python3.9[109605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:18.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:19.062+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:19 np0005592159 python3.9[109758]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:20.020+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:20.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:20 np0005592159 python3.9[109913]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:20.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:21.049+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:42:21 np0005592159 python3.9[110091]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:21 np0005592159 python3.9[110268]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:22.048+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:42:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:22.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:22.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.777916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342778025, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 339, "num_deletes": 250, "total_data_size": 247438, "memory_usage": 253504, "flush_reason": "Manual Compaction"}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342781432, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 162793, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9958, "largest_seqno": 10292, "table_properties": {"data_size": 160615, "index_size": 342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5684, "raw_average_key_size": 19, "raw_value_size": 156292, "raw_average_value_size": 533, "num_data_blocks": 14, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089337, "oldest_key_time": 1769089337, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 3541 microseconds, and 1023 cpu microseconds.
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.781461) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 162793 bytes OK
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.781476) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783252) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783266) EVENT_LOG_v1 {"time_micros": 1769089342783262, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 245067, prev total WAL file size 245067, number of live WAL files 2.
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783591) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(158KB)], [18(10MB)]
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342783620, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 10686461, "oldest_snapshot_seqno": -1}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 4025 keys, 7891879 bytes, temperature: kUnknown
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342843243, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 7891879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7862578, "index_size": 18119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 99768, "raw_average_key_size": 24, "raw_value_size": 7787090, "raw_average_value_size": 1934, "num_data_blocks": 782, "num_entries": 4025, "num_filter_entries": 4025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089342, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.843672) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 7891879 bytes
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.845719) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 178.8 rd, 132.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 10.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(114.1) write-amplify(48.5) OK, records in: 4536, records dropped: 511 output_compression: NoCompression
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.845767) EVENT_LOG_v1 {"time_micros": 1769089342845745, "job": 8, "event": "compaction_finished", "compaction_time_micros": 59770, "compaction_time_cpu_micros": 17627, "output_level": 6, "num_output_files": 1, "total_output_size": 7891879, "num_input_records": 4536, "num_output_records": 4025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342846210, "job": 8, "event": "table_file_deletion", "file_number": 20}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089342850517, "job": 8, "event": "table_file_deletion", "file_number": 18}
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.783536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850613) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:42:22.850615) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:42:22 np0005592159 python3.9[110421]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:23.030+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:23 np0005592159 python3.9[110573]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 22 08:42:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:24.032+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:24.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:24 np0005592159 systemd[1]: session-39.scope: Deactivated successfully.
Jan 22 08:42:24 np0005592159 systemd[1]: session-39.scope: Consumed 29.166s CPU time.
Jan 22 08:42:24 np0005592159 systemd-logind[787]: Session 39 logged out. Waiting for processes to exit.
Jan 22 08:42:24 np0005592159 systemd-logind[787]: Removed session 39.
Jan 22 08:42:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:25.068+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:26.071+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:26.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:26.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:27.076+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:28.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:28.082+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:28 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:29.062+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:30.060+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:30.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:31.038+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:32.013+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:32.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:32 np0005592159 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 22 08:42:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:32.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:33.012+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:34.048+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:34.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:34.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:35.037+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:35 np0005592159 systemd-logind[787]: New session 40 of user zuul.
Jan 22 08:42:35 np0005592159 systemd[1]: Started Session 40 of User zuul.
Jan 22 08:42:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:36.045+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:36.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:37.054+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:37 np0005592159 python3.9[110814]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 22 08:42:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:38.042+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:38.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:38.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:38 np0005592159 python3.9[110967]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:38.994+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:39.987+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:42:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:40.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:42:40 np0005592159 python3.9[111122]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 22 08:42:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:40.959+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:41 np0005592159 python3.9[111274]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.rtp1qndu follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:42:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:41.973+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:42 np0005592159 python3.9[111399]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.rtp1qndu mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089361.1670475-110-213754723382366/.source.rtp1qndu _original_basename=.8r04hswq follow=False checksum=9893b3bde8503c371031e4467aece9772279f87c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:42.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:42.938+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:43 np0005592159 python3.9[111552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:43.952+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:44.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:44 np0005592159 python3.9[111706]: ansible-ansible.builtin.blockinfile Invoked with block=compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC2ocldELG9EA3TbFx5afl1mbwf9X+3Gzx1pKWvAq8+0s5gE2NeAD23paYiiaQ+/r8QE6CHtXOoy/H9FGAGU3oxMrZnEX7nslelo1+Q7jWdE7ILrzUhQpkJeXJNMrA3p7aBbMxEqMXO9Ydl3Cu0CA+jItIQW1oTWLvS+BsWbES09z++jcPgu6HJu1lFXD9GgU53AfhpFcnhuxK8AnNyG1iy1Zus5Xi2NlME94THioW0/1Ek8Pl/PbSdpaErM1lgrZ7Yl/MdCelTNQI4tQrJebtNynEMhrYTBwbruS6YIia/ZSxDJZWt9bg1dpkd24KSpr4hz5kDn4sCFHyPV/JMYmuvTwFByBXc92tBbYeQU5KMBP8OFjlzfm1uAfnM1BOyrPOy7E5RFig010mTP/VruBFb/T+3Z9DqjZCkGagdrKrV80AwqnAsn/mMG/tHarrHLr8BRX1UIFUz2qfFaBpSkmeQ6u3ERLQyvJIjXaXjvvmQVDRQxd8P5HWM57joMC2P+c8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFTUVWfsHbDnQr7ZM9BkSRv9ghRtTlzwZgmDm9W4jCII#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGjBy4pT9xvRinN5D7FG54iZjTb5U7Le6fRnUKrD4anfJZQ1Vd0mJxikxxi0T2VsVngeW+U82a0S7cK3UeWIL9s=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCz1S+AyqG+uG2QcnBxDRKRCSQ1ADb7AX9YKwfPf8jy0Q8YD3aJm/CVexcMyR1BQUaGjRFoZkm/O4ekVQ36cOQ2M7HRv78pGNm0BGtfNeFeRB5w5+RSPgj1rY9joGiRIZoyVVlz9uuM9NTlYiNC/X5gLWfreUbCGl6lDKkxGdOjUnjuZ2djcx48WXZurkkcjd9j3WCQl899CDpx6elTEEZaV3/mbpfEtOtTXEFfoq1Z1XSjngnkZMARqt+JIN02f6kgEgWNSRAJxqYbFz1jtY43UJ/C2mO29LedfXOW3dpKCC6QHdPDSQJp2Jrf0izl52jvmpDvr6wWY9PW9AmMyxh1gSuP1a/uteKBBf7vlxtpYJWDSivQxPZw3RbBZuhspxefEOUXkwGNycW/+rPGFZRrAVYWLTZ6dLn0aviyE1+ZEDIMJop1CohPOhvJxJ7s1ulnjvVDc7kLhmBewXbeY3Lp6SoMUK8ziKHsTr2Y/RfK8d7LXmARc7+O9VWI4VVV8U=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIArjsNRQko0Q06DDAhSCoRYTLidRzR9vGa18TMghIrTh#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBDfBKVIdWmS1D3kNVJYnvsERskkDp7/TXgEseqOABxcNISULCvy6hWTcKYjXdFK5Yrl53dvxfzzAGTPPln3an4=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDARChhswCxxjhho4qSL0BKXUq4AvMW1MDxy3K15MpkFlnctOqsuulAZum+3JFif15RegZjzUC7sGyhSLoFUnXimQHlJIlaGg+Vr+vh23ujuk8uWbwf6q8CF03tz4edapNjNQ+SCuGRJkINMaGGTzgBwoStqctW97kU0Z+A4cqgyMG8V8ZvSG7it0puvEOIYw5rtCA7Svueoxb5UMO33HTJbIuILYxnfEyUIHSsziJHGhRFJJ7PcNH3B4Ogew4pg31GaTi9pIHKHt/YE6WKj7P7HxpTVvgBsI27Pveo4PPkH4yCwjZlntIAvJhn+6czWlsTsmf+EUSf+u1mst9EmzJ/BztwNxcUjlAkf1E3UzoEKB70ShX+201s+/Z9VrHZj4Ku7Ptht9N5F8J01j2+qYCnmeLK9AWqkanEZy5N+hICP1XbFk3IlKyUW4Km0CXwZmXlvdC5Juyt74uJfeiNcsarU75daE2Zx4+j76+JtN8BKgrIAzEcyLOLCOxspAtxGB8=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILuPMhHnuBKJH3E1cndLaLMVE35g920qreV5wjp7kiGA#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMjB1VLvlmcfY82jQpLEcCHkJB16T8jGBBdZAl8DHhdWgqjciDgZx2zOlmbn8OtO4dCPZsLT8VomlJYVqIcvuZ4=#012 create=True mode=0644 path=/tmp/ansible.rtp1qndu state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:44.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:45.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:45 np0005592159 python3.9[111859]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.rtp1qndu' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:46.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:46.915+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:46 np0005592159 python3.9[112014]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.rtp1qndu state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:42:47 np0005592159 systemd[1]: session-40.scope: Deactivated successfully.
Jan 22 08:42:47 np0005592159 systemd[1]: session-40.scope: Consumed 4.887s CPU time.
Jan 22 08:42:47 np0005592159 systemd-logind[787]: Session 40 logged out. Waiting for processes to exit.
Jan 22 08:42:47 np0005592159 systemd-logind[787]: Removed session 40.
Jan 22 08:42:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:47 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:47.924+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:48.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:48.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:48.920+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:49.904+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:50.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:50.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:51.846+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:52.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:52 np0005592159 systemd-logind[787]: New session 41 of user zuul.
Jan 22 08:42:52 np0005592159 systemd[1]: Started Session 41 of User zuul.
Jan 22 08:42:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:52.892+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:53.854+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:53 np0005592159 python3.9[112245]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:42:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:54.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:54.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:54.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:55 np0005592159 python3.9[112403]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:42:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:55.834+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:56.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:42:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:56.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:56 np0005592159 python3.9[112558]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:42:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:56.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:57.872+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:57 np0005592159 python3.9[112711]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:42:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 364 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:42:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:42:58.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:42:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:42:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:42:58.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:42:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:58.921+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:58 np0005592159 python3.9[112865]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:42:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:42:59.907+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:42:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:42:59 np0005592159 python3.9[113017]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:00.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:00 np0005592159 systemd[1]: session-41.scope: Deactivated successfully.
Jan 22 08:43:00 np0005592159 systemd[1]: session-41.scope: Consumed 3.752s CPU time.
Jan 22 08:43:00 np0005592159 systemd-logind[787]: Session 41 logged out. Waiting for processes to exit.
Jan 22 08:43:00 np0005592159 systemd-logind[787]: Removed session 41.
Jan 22 08:43:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:00.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:00.933+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:01.964+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:02.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:02.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:02 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:02.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:03.972+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:04.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:04.993+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:06.007+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:06 np0005592159 systemd-logind[787]: New session 42 of user zuul.
Jan 22 08:43:06 np0005592159 systemd[1]: Started Session 42 of User zuul.
Jan 22 08:43:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:06.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:07.030+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:07 np0005592159 python3.9[113199]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:07.980+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:08 np0005592159 python3.9[113355]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:43:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:08.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:09.007+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:09 np0005592159 python3.9[113470]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 22 08:43:09 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:10.000+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:10.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:10.983+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:11 np0005592159 python3.9[113642]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:43:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:11.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:12.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:12.996+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:13 np0005592159 python3.9[113794]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:43:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:13 np0005592159 python3.9[113944]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:14.001+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:14.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:14 np0005592159 python3.9[114095]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:43:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:14.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:14.974+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:15 np0005592159 systemd[1]: session-42.scope: Deactivated successfully.
Jan 22 08:43:15 np0005592159 systemd[1]: session-42.scope: Consumed 5.624s CPU time.
Jan 22 08:43:15 np0005592159 systemd-logind[787]: Session 42 logged out. Waiting for processes to exit.
Jan 22 08:43:15 np0005592159 systemd-logind[787]: Removed session 42.
Jan 22 08:43:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:15.940+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:16.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:16.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:16.963+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:17.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:18.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:18.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:18.913+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:19.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:20.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:20.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:21 np0005592159 systemd-logind[787]: New session 43 of user zuul.
Jan 22 08:43:21 np0005592159 systemd[1]: Started Session 43 of User zuul.
Jan 22 08:43:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:21.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:21 np0005592159 podman[114397]: 2026-01-22 13:43:21.931017425 +0000 UTC m=+0.064434184 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:43:22 np0005592159 podman[114397]: 2026-01-22 13:43:22.022962257 +0000 UTC m=+0.156379016 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 08:43:22 np0005592159 python3.9[114468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:43:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:22 np0005592159 podman[114633]: 2026-01-22 13:43:22.736464242 +0000 UTC m=+0.177206643 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:43:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:22.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:22 np0005592159 podman[114633]: 2026-01-22 13:43:22.747680337 +0000 UTC m=+0.188422708 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:43:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:22.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:22 np0005592159 podman[114699]: 2026-01-22 13:43:22.939656601 +0000 UTC m=+0.051402610 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, description=keepalived for Ceph, release=1793, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived)
Jan 22 08:43:22 np0005592159 podman[114699]: 2026-01-22 13:43:22.98371541 +0000 UTC m=+0.095461439 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, name=keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vcs-type=git)
Jan 22 08:43:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:23.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:24 np0005592159 python3.9[114978]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:43:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:43:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:24.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:24.796+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:24 np0005592159 python3.9[115143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:25.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:25 np0005592159 python3.9[115295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:25.831+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:26 np0005592159 python3.9[115418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089405.023385-154-59949951216217/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=63b51bd5f8f7b1595ccb625079ef1c0e74a34cd4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:26.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:26.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592159 python3.9[115571]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:27.845+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:27 np0005592159 python3.9[115694]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089406.7464411-154-56265308602244/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=cc1c70588824ebebf3437effcc8b7daf397d0332 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:28 np0005592159 python3.9[115847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:28.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:28.833+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:29 np0005592159 python3.9[115970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089408.0385644-154-185353800573225/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=c446a79c9e0c2c4e1866f2c8d564bd6e393bc473 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:29 np0005592159 python3.9[116172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:29.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 399 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592159 python3.9[116324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:30.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:30.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:30 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:30 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:43:31 np0005592159 python3.9[116477]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:31 np0005592159 python3.9[116650]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089410.6217-336-11513084826455/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=d7ceac7a2a3de5d60ce6109627fc28aa85299752 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:31.827+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:32 np0005592159 python3.9[116802]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:32 np0005592159 python3.9[116926]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089411.7551444-336-172225582209765/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:32.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:32.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:33 np0005592159 python3.9[117078]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:33.858+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:33 np0005592159 python3.9[117201]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089412.874086-336-206277195598569/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=dd5d85a06a624929f5f6a9d093c91f37f447db74 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:34 np0005592159 python3.9[117354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:34.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:34.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:35 np0005592159 python3.9[117506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:35.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:36 np0005592159 python3.9[117658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:36 np0005592159 python3.9[117782]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089415.6172404-508-169552344552263/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=064b6b2de03bd1b3c0ee9a7de3a1cc7f54c2c8c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:36.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:37 np0005592159 python3.9[117934]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:37.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:37 np0005592159 python3.9[118057]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089416.9563866-508-230053014081636/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=9db852ea1063f3b3372c70e7b1ec0fee5b9f16e1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:38 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 404 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:38 np0005592159 python3.9[118210]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:43:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:38.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:43:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:38.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:39 np0005592159 python3.9[118333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089418.049029-508-56370431584138/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=0bd5fdf5b338410f4386fce1270ddc78cda35238 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:39.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:39.885+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:40 np0005592159 python3.9[118485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:40 np0005592159 python3.9[118638]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:40.912+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:41 np0005592159 python3.9[118761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089420.4309561-705-40875315371661/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:41.930+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:42 np0005592159 python3.9[118913]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:42 np0005592159 python3.9[119066]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:42.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:42.887+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:43 np0005592159 python3.9[119189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089422.2060344-770-60931968141020/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:43 np0005592159 python3.9[119341]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:43.916+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:44 np0005592159 python3.9[119493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:44.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:44 np0005592159 python3.9[119617]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089424.0723221-838-1354098801245/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:45 np0005592159 python3.9[119769]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:45.880+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:46 np0005592159 python3.9[119921]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:46.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:46 np0005592159 python3.9[120045]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089425.8454711-902-51465542634224/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:46.862+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:47.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:47 np0005592159 python3.9[120197]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:47.846+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:48 np0005592159 python3.9[120349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:48 np0005592159 python3.9[120473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089427.6470435-965-141793958172935/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:48.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:48.895+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:49 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 419 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:49.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:49 np0005592159 python3.9[120625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:43:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:49.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:50 np0005592159 python3.9[120827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:43:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:50.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:50 np0005592159 python3.9[120951]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089429.617371-1032-76471749655435/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=c4f4c98657a71a0b13d9544ea5406adecfa4896c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:43:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:50.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:51.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:51.848+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:52.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:53.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:53.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:54 np0005592159 systemd[1]: session-43.scope: Deactivated successfully.
Jan 22 08:43:54 np0005592159 systemd[1]: session-43.scope: Consumed 21.706s CPU time.
Jan 22 08:43:54 np0005592159 systemd-logind[787]: Session 43 logged out. Waiting for processes to exit.
Jan 22 08:43:54 np0005592159 systemd-logind[787]: Removed session 43.
Jan 22 08:43:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:54.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:54.899+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:55.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:55.849+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:43:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:56.889+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:57.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:57.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 424 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:43:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:43:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:43:58.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:43:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:58.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:43:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:43:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:43:59.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:43:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:43:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:43:59.866+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:43:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592159 systemd-logind[787]: New session 44 of user zuul.
Jan 22 08:44:00 np0005592159 systemd[1]: Started Session 44 of User zuul.
Jan 22 08:44:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:00.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592159 python3.9[121136]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:01.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:01.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:01 np0005592159 python3.9[121288]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:02 np0005592159 python3.9[121412]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089441.3151555-64-172847734527370/.source.conf _original_basename=ceph.conf follow=False checksum=c3a8ec6ec08fd3904e44a403280c0742b2934d96 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:02.760+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:02 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 434 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:03 np0005592159 python3.9[121564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:03.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:03.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:03 np0005592159 python3.9[121687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089442.8313508-64-250243575326191/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8d4a0ad3eb7bcba9ed45036c12ef9de6a4ee9832 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:04 np0005592159 systemd[1]: session-44.scope: Deactivated successfully.
Jan 22 08:44:04 np0005592159 systemd[1]: session-44.scope: Consumed 2.534s CPU time.
Jan 22 08:44:04 np0005592159 systemd-logind[787]: Session 44 logged out. Waiting for processes to exit.
Jan 22 08:44:04 np0005592159 systemd-logind[787]: Removed session 44.
Jan 22 08:44:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:04.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:04.784+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:05.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:05.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:44:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:06.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:44:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:06.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:07.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:07.833+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:08.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:08.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 439 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:09.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:09.840+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:09 np0005592159 systemd-logind[787]: New session 45 of user zuul.
Jan 22 08:44:09 np0005592159 systemd[1]: Started Session 45 of User zuul.
Jan 22 08:44:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:10.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:10.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:10 np0005592159 python3.9[121919]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:11.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:11.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:12 np0005592159 python3.9[122075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:12.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:12.865+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:12 np0005592159 python3.9[122228]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:13.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:13.892+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:13 np0005592159 python3.9[122378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:14.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:14 np0005592159 python3.9[122531]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:44:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:14.920+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:15.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:15.903+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:16.917+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:17 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 22 08:44:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:44:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:17.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:44:17 np0005592159 python3.9[122688]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:44:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:17.872+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:18 np0005592159 python3.9[122772]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:44:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:18.910+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:18.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 444 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:19.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:19.866+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:20.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:20.849+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:20 np0005592159 python3.9[122927]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:44:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:21.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:21.858+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:21 np0005592159 python3[123082]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 22 08:44:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:22.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:22.874+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:22 np0005592159 python3.9[123235]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:23.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 454 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:23 np0005592159 python3.9[123387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:23.886+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:24 np0005592159 python3.9[123465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:24.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:24.927+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:25 np0005592159 python3.9[123618]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:25.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:25 np0005592159 python3.9[123696]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.yoyo3fgj recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:25.914+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:26 np0005592159 python3.9[123848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:26.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:26 np0005592159 python3.9[123927]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:26.919+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:27.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:27.935+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:28 np0005592159 python3.9[124079]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:28.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:28 np0005592159 python3[124233]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:44:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:28.939+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:29.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:29.925+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:29 np0005592159 python3.9[124408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:30 np0005592159 python3.9[124561]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089469.4669268-433-970236791258/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:30.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:31.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:31 np0005592159 python3.9[124813]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:31 np0005592159 podman[124913]: 2026-01-22 13:44:31.883812827 +0000 UTC m=+0.061717124 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 08:44:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:31.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:32 np0005592159 podman[124913]: 2026-01-22 13:44:32.008709072 +0000 UTC m=+0.186613339 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 08:44:32 np0005592159 python3.9[125067]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089471.117775-478-201123633362601/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:32 np0005592159 podman[125195]: 2026-01-22 13:44:32.53248876 +0000 UTC m=+0.050203806 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:44:32 np0005592159 podman[125195]: 2026-01-22 13:44:32.543884355 +0000 UTC m=+0.061599391 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:44:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:32 np0005592159 podman[125331]: 2026-01-22 13:44:32.811185744 +0000 UTC m=+0.103937295 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, release=1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 22 08:44:32 np0005592159 podman[125331]: 2026-01-22 13:44:32.825792665 +0000 UTC m=+0.118544236 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, architecture=x86_64, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, name=keepalived, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20)
Jan 22 08:44:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:32.876+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:33 np0005592159 python3.9[125419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 464 sec, osd.2 has slow ops (SLOW_OPS)
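osd.2 keeps re-reporting the same two slow requests (an omap-get-vals read of rbd_mirror_snapshot_schedule in PG 2.12, most affecting the vms pool), and the monitor escalates that into a SLOW_OPS health warning whose "blocked for N sec" counter keeps growing. A hedged sketch of how one might drill into it from this node; the admin-socket location under a cephadm deployment is an assumption:

    # cluster-wide view of the warning
    ceph health detail
    # ask the affected OSD which operations are stuck (run inside the OSD container,
    # or point at the osd.2 .asok under /var/run/ceph/<fsid>/ on the host)
    ceph daemon osd.2 dump_ops_in_flight
    ceph daemon osd.2 dump_historic_slow_ops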
Jan 22 08:44:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:33.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:33 np0005592159 python3.9[125660]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089472.5185363-523-15736659686305/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
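Each nftables file is laid down with the usual two-step Ansible pattern: a stat task that hashes the existing target, then ansible.legacy.copy, which only rewrites /etc/nftables/edpm-flushes.nft when the rendered template's checksum differs. A minimal shell sketch of the same compare-then-install idempotence check (destination and mode come from the log; the staged source path is illustrative):

    src=/tmp/edpm-flushes.nft                     # rendered flush-chain.j2 (illustrative path)
    dst=/etc/nftables/edpm-flushes.nft
    if [ ! -e "$dst" ] || [ "$(sha1sum < "$src" | cut -d' ' -f1)" != "$(sha1sum < "$dst" | cut -d' ' -f1)" ]; then
        install -o root -g root -m 0600 "$src" "$dst"
    fi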
Jan 22 08:44:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:33.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:34 np0005592159 python3.9[125828]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:44:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:44:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:34.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:34.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:34 np0005592159 python3.9[125954]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089473.962673-569-3361112911399/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:35.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:35 np0005592159 python3.9[126106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:35.850+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:36 np0005592159 python3.9[126231]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089475.3304865-613-8667519708261/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:36.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:36.813+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:37 np0005592159 python3.9[126384]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:37.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:37.778+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:37 np0005592159 python3.9[126536]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
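Before anything touches the live ruleset, the role concatenates the chain definitions, flushes, rules and the two jump files in that order and feeds them to nft in check-only mode, so a syntax error or a reference to a missing chain fails the play with no side effects. The same dry run by hand, with exactly the paths from the task above:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -    # -c: parse and check only, do not load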
Jan 22 08:44:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:38.801+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:38 np0005592159 python3.9[126692]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
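The blockinfile task makes the ruleset persistent: it maintains an Ansible-managed block of include lines in /etc/sysconfig/nftables.conf (the file the RHEL nftables service loads at boot) and validates the candidate file with nft -c -f %s before swapping it in. A simplified, non-idempotent sketch of the same end state, with the include list reconstructed from the #012-encoded block content in the log:

    {
        echo '# BEGIN ANSIBLE MANAGED BLOCK'
        echo 'include "/etc/nftables/iptables.nft"'
        echo 'include "/etc/nftables/edpm-chains.nft"'
        echo 'include "/etc/nftables/edpm-rules.nft"'
        echo 'include "/etc/nftables/edpm-jumps.nft"'
        echo '# END ANSIBLE MANAGED BLOCK'
    } >> /etc/sysconfig/nftables.conf
    nft -c -f /etc/sysconfig/nftables.conf    # same check the task runs via validate=

Unlike the module, this sketch appends unconditionally; blockinfile replaces whatever sits between the BEGIN/END markers on subsequent runs.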
Jan 22 08:44:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:39.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 469 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:39.767+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:39 np0005592159 python3.9[126844]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:40 np0005592159 python3.9[127048]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:40.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:40.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:44:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:41.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:41 np0005592159 python3.9[127202]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:41.809+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:42 np0005592159 python3.9[127357]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
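The live reload is gated by a marker file: edpm-rules.nft.changed is touched when the rules file is rewritten, the chains file is (re)loaded unconditionally with nft -f, and only while the marker is present are the flush, rules and update-jumps files streamed into nft before the marker is removed again. A condensed sketch of that cycle, assuming the same file layout as above:

    nft -f /etc/nftables/edpm-chains.nft              # chains are always (re)created first
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed    # consume the marker
    fi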
Jan 22 08:44:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:42.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:43.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:43 np0005592159 python3.9[127508]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:44:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:43.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:44.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:44:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:44.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:44:45 np0005592159 python3.9[127662]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:45 np0005592159 ovs-vsctl[127663]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
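This ovs-vsctl call registers the node as an OVN chassis: hostname, integration bridge, bridge mappings, chassis MAC mapping, Geneve encapsulation IP and the southbound database endpoint all land in the external_ids column of the Open_vSwitch table, where ovn-controller reads them. The same keys can be read back afterwards, for example:

    # inspect the chassis registration written above
    ovs-vsctl get open_vswitch . external_ids:ovn-remote
    ovs-vsctl get open_vswitch . external_ids:ovn-encap-ip
    ovs-vsctl get open_vswitch . external_ids:ovn-bridge-mappings
    ovs-vsctl list open_vswitch .     # full record, including all external_ids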
Jan 22 08:44:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:45.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:45.734+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592159 python3.9[127815]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:46.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:46 np0005592159 python3.9[127971]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:44:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:46.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:46 np0005592159 ovs-vsctl[127972]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
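The two tasks form an idempotence guard: "ovs-vsctl show | grep -q Manager" succeeds if a Manager record already exists, and only when it does not is one created, exposing the local ovsdb-server on ptcp:6640:127.0.0.1 (the target is masked in the Ansible line but visible in the ovs-vsctl audit entry above). Roughly the same guard as a one-liner:

    ovs-vsctl show | grep -q Manager || \
        ovs-vsctl --timeout=5 --id=@manager -- create Manager 'target="ptcp:6640:127.0.0.1"' \
                  -- add Open_vSwitch . manager_options @manager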
Jan 22 08:44:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:47.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:47.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:47 np0005592159 python3.9[128122]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:44:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:48 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 474 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:48.733+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:48 np0005592159 python3.9[128277]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:48.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:49.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:49 np0005592159 python3.9[128431]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:49.743+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:49 np0005592159 python3.9[128509]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:50 np0005592159 python3.9[128712]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:50.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:44:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:50.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:44:51 np0005592159 python3.9[128790]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:44:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:51.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:51.749+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:51 np0005592159 python3.9[128942]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
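One note on the task above: mode=420 is just the decimal rendering of octal 0644 (an unquoted 0644 in YAML is parsed as an octal integer, and Ansible applies numeric modes as-is), so the chmod performed on /etc/systemd/system-preset is the ordinary 0644, not a garbled value. Quick check:

    printf '%o\n' 420    # prints 644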
Jan 22 08:44:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:52.721+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:52 np0005592159 python3.9[129095]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:52.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:53 np0005592159 python3.9[129173]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:53.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:53.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:54 np0005592159 python3.9[129325]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:54 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 484 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:54 np0005592159 python3.9[129404]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:54.742+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:54.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:55.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:55 np0005592159 python3.9[129556]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:44:55 np0005592159 systemd[1]: Reloading.
Jan 22 08:44:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:55.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:55 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:44:55 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
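The ansible.builtin.systemd task accounts for the "Reloading." pass and the generator chatter that follows: it performs the equivalent of a daemon-reload, enables edpm-container-shutdown (the 91-edpm-container-shutdown.preset installed above presumably carries the matching enable policy), and starts the unit. Roughly the same by hand:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
    # or, to apply the preset file explicitly instead of enabling directly:
    systemctl preset edpm-container-shutdown.service
    systemctl start edpm-container-shutdown.service

The SysV-generator warning about /etc/rc.d/init.d/network and the rc.local note are routine output of every reload on this host, not a consequence of this particular unit.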
Jan 22 08:44:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:44:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:56.691+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:56.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:57.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:57.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592159 python3.9[129745]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:58 np0005592159 python3.9[129823]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:58.693+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:44:58.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:59 np0005592159 python3.9[129978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:44:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:44:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:44:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:44:59.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:44:59 np0005592159 python3.9[130056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:44:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:44:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 489 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:44:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:44:59.731+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:44:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:00 np0005592159 python3.9[130209]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:00 np0005592159 systemd[1]: Reloading.
Jan 22 08:45:00 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:00 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:00.770+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:00.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:01 np0005592159 systemd[1]: Starting Create netns directory...
Jan 22 08:45:01 np0005592159 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:45:01 np0005592159 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:45:01 np0005592159 systemd[1]: Finished Create netns directory.
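netns-placeholder behaves as a oneshot: systemd starts "Create netns directory", the transient run-netns-placeholder.mount unit deactivates, and the service finishes, leaving the network-namespace directory prepared for containers started later. The unit's contents are not in this log, so the sketch below only verifies the observable result:

    systemctl status netns-placeholder.service --no-pager
    findmnt /run/netns      # shows whether a netns mount is currently present
    ip netns list           # should succeed, possibly with empty output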
Jan 22 08:45:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:01.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:01.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:02 np0005592159 python3.9[130403]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:02.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:02.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:02 np0005592159 python3.9[130556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:03.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:03 np0005592159 python3.9[130679]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089502.4662876-1366-12125711191895/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:03.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:04.762+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:04 np0005592159 python3.9[130832]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:04.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:05.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:05 np0005592159 python3.9[130984]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:05.783+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:06 np0005592159 python3.9[131136]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:06 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:45:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:06.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:06.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:06 np0005592159 python3.9[131261]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089505.9257116-1465-108793043498901/.source.json _original_basename=.t77mqyzy follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:07.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.690275) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507690369, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2364, "num_deletes": 251, "total_data_size": 4762286, "memory_usage": 4808880, "flush_reason": "Manual Compaction"}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507720456, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 3097044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10297, "largest_seqno": 12656, "table_properties": {"data_size": 3088227, "index_size": 5055, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21978, "raw_average_key_size": 20, "raw_value_size": 3068799, "raw_average_value_size": 2919, "num_data_blocks": 220, "num_entries": 1051, "num_filter_entries": 1051, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089343, "oldest_key_time": 1769089343, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 30227 microseconds, and 7083 cpu microseconds.
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.720512) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 3097044 bytes OK
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.720528) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723871) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723912) EVENT_LOG_v1 {"time_micros": 1769089507723903, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.723931) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 4751571, prev total WAL file size 4751571, number of live WAL files 2.
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725579) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(3024KB)], [21(7706KB)]
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507725617, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 10988923, "oldest_snapshot_seqno": -1}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4557 keys, 8311586 bytes, temperature: kUnknown
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507785230, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 8311586, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8279760, "index_size": 19300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 112570, "raw_average_key_size": 24, "raw_value_size": 8195764, "raw_average_value_size": 1798, "num_data_blocks": 819, "num_entries": 4557, "num_filter_entries": 4557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089507, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.785586) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8311586 bytes
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.787531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 183.7 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.7) OK, records in: 5076, records dropped: 519 output_compression: NoCompression
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.787569) EVENT_LOG_v1 {"time_micros": 1769089507787555, "job": 10, "event": "compaction_finished", "compaction_time_micros": 59828, "compaction_time_cpu_micros": 19372, "output_level": 6, "num_output_files": 1, "total_output_size": 8311586, "num_input_records": 5076, "num_output_records": 4557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:45:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:07.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507788855, "job": 10, "event": "table_file_deletion", "file_number": 23}
Jan 22 08:45:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089507790089, "job": 10, "event": "table_file_deletion", "file_number": 21}
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.725016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790170) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:45:07.790179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:45:07 np0005592159 python3.9[131411]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 494 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:08.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:45:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:08.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:45:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:09.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:09.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:10 np0005592159 python3.9[131886]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 22 08:45:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:45:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:10.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:45:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:10.855+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:11.829+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592159 python3.9[132038]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:45:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:12 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 504 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:12.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:12.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:13.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:13.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:14 np0005592159 python3[132191]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:45:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:14.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:14.931+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:15.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:15.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:16.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:16.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:17.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:17.881+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:18.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:18.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:19.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 509 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:19.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592159 podman[132203]: 2026-01-22 13:45:20.031159846 +0000 UTC m=+5.779026092 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:20 np0005592159 podman[132332]: 2026-01-22 13:45:20.157366555 +0000 UTC m=+0.048029427 container create 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 08:45:20 np0005592159 podman[132332]: 2026-01-22 13:45:20.132233602 +0000 UTC m=+0.022896494 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:20 np0005592159 python3[132191]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 22 08:45:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:20.847+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:20.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:21 np0005592159 python3.9[132523]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:21.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:21.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:22.781+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:22.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:23 np0005592159 python3.9[132678]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:23 np0005592159 python3.9[132754]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:45:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:23.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:24.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:24.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:24 np0005592159 python3.9[132911]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089523.5854223-1699-162723346994107/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:25.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:25 np0005592159 python3.9[132987]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:45:25 np0005592159 systemd[1]: Reloading.
Jan 22 08:45:25 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:25 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:25.802+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:26 np0005592159 python3.9[133099]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:45:26 np0005592159 systemd[1]: Reloading.
Jan 22 08:45:26 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:26 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:26.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:26.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:27 np0005592159 systemd[1]: Starting ovn_controller container...
Jan 22 08:45:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:27.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:27 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:45:27 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1734721b55cc982c684897978a32ef7483dd133591a02eac7552c372dda4a22e/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 22 08:45:27 np0005592159 systemd[1]: Started /usr/bin/podman healthcheck run 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356.
Jan 22 08:45:27 np0005592159 podman[133141]: 2026-01-22 13:45:27.478365841 +0000 UTC m=+0.331778876 container init 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + sudo -E kolla_set_configs
Jan 22 08:45:27 np0005592159 podman[133141]: 2026-01-22 13:45:27.503160955 +0000 UTC m=+0.356573980 container start 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:45:27 np0005592159 edpm-start-podman-container[133141]: ovn_controller
Jan 22 08:45:27 np0005592159 systemd[1]: Created slice User Slice of UID 0.
Jan 22 08:45:27 np0005592159 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 22 08:45:27 np0005592159 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 22 08:45:27 np0005592159 systemd[1]: Starting User Manager for UID 0...
Jan 22 08:45:27 np0005592159 edpm-start-podman-container[133140]: Creating additional drop-in dependency for "ovn_controller" (8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356)
Jan 22 08:45:27 np0005592159 podman[133163]: 2026-01-22 13:45:27.570285273 +0000 UTC m=+0.057938583 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 08:45:27 np0005592159 systemd[1]: 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356-44e4d69ad703dadb.service: Main process exited, code=exited, status=1/FAILURE
Jan 22 08:45:27 np0005592159 systemd[1]: 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356-44e4d69ad703dadb.service: Failed with result 'exit-code'.
Jan 22 08:45:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:45:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:45:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:45:27 np0005592159 systemd[133194]: Queued start job for default target Main User Target.
Jan 22 08:45:27 np0005592159 systemd[133194]: Created slice User Application Slice.
Jan 22 08:45:27 np0005592159 systemd[133194]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 22 08:45:27 np0005592159 systemd[133194]: Started Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:27 np0005592159 systemd[133194]: Reached target Paths.
Jan 22 08:45:27 np0005592159 systemd[133194]: Reached target Timers.
Jan 22 08:45:27 np0005592159 systemd[133194]: Starting D-Bus User Message Bus Socket...
Jan 22 08:45:27 np0005592159 systemd[133194]: Starting Create User's Volatile Files and Directories...
Jan 22 08:45:27 np0005592159 systemd[133194]: Finished Create User's Volatile Files and Directories.
Jan 22 08:45:27 np0005592159 systemd[133194]: Listening on D-Bus User Message Bus Socket.
Jan 22 08:45:27 np0005592159 systemd[133194]: Reached target Sockets.
Jan 22 08:45:27 np0005592159 systemd[133194]: Reached target Basic System.
Jan 22 08:45:27 np0005592159 systemd[133194]: Reached target Main User Target.
Jan 22 08:45:27 np0005592159 systemd[133194]: Startup finished in 133ms.
Jan 22 08:45:27 np0005592159 systemd[1]: Started User Manager for UID 0.
Jan 22 08:45:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:27.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:27 np0005592159 systemd[1]: Started ovn_controller container.
Jan 22 08:45:27 np0005592159 systemd[1]: Started Session c1 of User root.
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: INFO:__main__:Validating config file
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: INFO:__main__:Writing out command to execute
Jan 22 08:45:27 np0005592159 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: ++ cat /run_command
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + ARGS=
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + sudo kolla_copy_cacerts
Jan 22 08:45:27 np0005592159 systemd[1]: Started Session c2 of User root.
Jan 22 08:45:27 np0005592159 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + [[ ! -n '' ]]
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + . kolla_extend_start
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + umask 0022
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:27Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:27Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:27Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 22 08:45:27 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:27Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0071] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0078] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <warn>  [1769089528.0081] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0087] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0093] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0096] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 08:45:28 np0005592159 kernel: br-int: entered promiscuous mode
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00010|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00011|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00012|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00013|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00014|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00015|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 22 08:45:28 np0005592159 systemd-udevd[133285]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00017|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00018|features|INFO|OVS Feature: ct_flush, state: supported
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00019|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00020|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00021|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0619] manager: (ovn-c803af-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0625] manager: (ovn-d9fd1e-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 22 08:45:28 np0005592159 kernel: genev_sys_6081: entered promiscuous mode
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0800] device (genev_sys_6081): carrier: link connected
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.0805] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/21)
Jan 22 08:45:28 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:28Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 22 08:45:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:28 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 514 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:28 np0005592159 NetworkManager[49000]: <info>  [1769089528.5387] manager: (ovn-7335e4-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 22 08:45:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:28.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:28.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:45:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2035 writes, 12K keys, 2035 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2035 writes, 12K keys, 2035 commit groups, 1.0 writes per commit group, ingest: 23.75 MB, 0.04 MB/s#012Interval WAL: 2035 writes, 2035 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    117.1      0.13              0.03         5    0.025       0      0       0.0       0.0#012  L6      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.3    159.2    132.6      0.26              0.08         4    0.064     18K   1811       0.0       0.0#012 Sum      1/0    7.93 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3    106.7    127.5      0.38              0.11         9    0.042     18K   1811       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.3    107.7    128.7      0.38              0.11         8    0.047     18K   1811       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    159.2    132.6      0.26              0.08         4    0.064     18K   1811       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    120.4      0.12              0.03         4    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.014, interval 0.014#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 1.30 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 4.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(62,1.13 MB,0.37106%) FilterBlock(9,59.98 KB,0.0192692%) IndexBlock(9,116.08 KB,0.0372887%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:45:29 np0005592159 python3.9[133416]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:45:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:29.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:29.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:30 np0005592159 python3.9[133619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:30.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:30.909+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:31 np0005592159 python3.9[133742]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089530.0587435-1834-150249655630305/.source.yaml _original_basename=.yjmkrj2h follow=False checksum=46f66c8a157c96fcb7cc69848fe925e114c66b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:45:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:31.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:31.909+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592159 python3.9[133894]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:32 np0005592159 ovs-vsctl[133895]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 22 08:45:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:32.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:32.929+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:32 np0005592159 python3.9[134048]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:32 np0005592159 ovs-vsctl[134050]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 22 08:45:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:33.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:33.940+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:34 np0005592159 python3.9[134203]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:45:34 np0005592159 ovs-vsctl[134205]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 22 08:45:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:34 np0005592159 systemd-logind[787]: Session 45 logged out. Waiting for processes to exit.
Jan 22 08:45:34 np0005592159 systemd[1]: session-45.scope: Deactivated successfully.
Jan 22 08:45:34 np0005592159 systemd[1]: session-45.scope: Consumed 56.147s CPU time.
Jan 22 08:45:34 np0005592159 systemd-logind[787]: Removed session 45.
Jan 22 08:45:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:34.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:34.945+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:35.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:35.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:36.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:36.947+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:37.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:37.986+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:38 np0005592159 systemd[1]: Stopping User Manager for UID 0...
Jan 22 08:45:38 np0005592159 systemd[133194]: Activating special unit Exit the Session...
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped target Main User Target.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped target Basic System.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped target Paths.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped target Sockets.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped target Timers.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 22 08:45:38 np0005592159 systemd[133194]: Closed D-Bus User Message Bus Socket.
Jan 22 08:45:38 np0005592159 systemd[133194]: Stopped Create User's Volatile Files and Directories.
Jan 22 08:45:38 np0005592159 systemd[133194]: Removed slice User Application Slice.
Jan 22 08:45:38 np0005592159 systemd[133194]: Reached target Shutdown.
Jan 22 08:45:38 np0005592159 systemd[133194]: Finished Exit the Session.
Jan 22 08:45:38 np0005592159 systemd[133194]: Reached target Exit the Session.
Jan 22 08:45:38 np0005592159 systemd[1]: user@0.service: Deactivated successfully.
Jan 22 08:45:38 np0005592159 systemd[1]: Stopped User Manager for UID 0.
Jan 22 08:45:38 np0005592159 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 22 08:45:38 np0005592159 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 22 08:45:38 np0005592159 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 22 08:45:38 np0005592159 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 22 08:45:38 np0005592159 systemd[1]: Removed slice User Slice of UID 0.
Jan 22 08:45:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:38.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:38.980+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:39.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:39.948+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:40 np0005592159 systemd-logind[787]: New session 47 of user zuul.
Jan 22 08:45:40 np0005592159 systemd[1]: Started Session 47 of User zuul.
Jan 22 08:45:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:40.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:40.925+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:41 np0005592159 python3.9[134499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:41.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:41.948+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 22 08:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:42 np0005592159 podman[134838]: 2026-01-22 13:45:42.862267146 +0000 UTC m=+0.063774569 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 08:45:42 np0005592159 python3.9[134807]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:42.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:42.902+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:42 np0005592159 podman[134838]: 2026-01-22 13:45:42.954437103 +0000 UTC m=+0.155944506 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 08:45:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:43.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:43 np0005592159 python3.9[135106]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:43 np0005592159 podman[135145]: 2026-01-22 13:45:43.623110091 +0000 UTC m=+0.065471555 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:45:43 np0005592159 podman[135145]: 2026-01-22 13:45:43.634207118 +0000 UTC m=+0.076568562 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 08:45:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:43 np0005592159 podman[135288]: 2026-01-22 13:45:43.846645257 +0000 UTC m=+0.054478290 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, vcs-type=git, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, description=keepalived for Ceph, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc.)
Jan 22 08:45:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:43.879+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:43 np0005592159 podman[135331]: 2026-01-22 13:45:43.948500015 +0000 UTC m=+0.084140035 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, architecture=x86_64, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vendor=Red Hat, Inc., version=2.2.4, vcs-type=git, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 08:45:43 np0005592159 podman[135288]: 2026-01-22 13:45:43.954432773 +0000 UTC m=+0.162265796 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, release=1793, architecture=x86_64, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, distribution-scope=public)
Jan 22 08:45:44 np0005592159 python3.9[135395]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:44.861+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:44.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:44 np0005592159 python3.9[135664]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:45.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:45 np0005592159 python3.9[135830]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:45.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:46 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:45:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:46 np0005592159 python3.9[135982]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:45:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:46.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:46.895+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:45:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:47.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:47 np0005592159 python3.9[136134]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 22 08:45:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:47.865+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:48 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:48.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:45:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:48.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:45:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:49 np0005592159 python3.9[136285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:49.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:49.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:50 np0005592159 python3.9[136406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089548.7588608-221-8133955253076/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:50.835+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:50.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:51 np0005592159 python3.9[136607]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:51.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:51 np0005592159 python3.9[136728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089550.483311-266-116379788181895/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:51.830+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:45:52 np0005592159 python3.9[136881]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:45:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:52.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:52.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:53.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 544 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:53 np0005592159 python3.9[137015]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:45:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:53.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:54.822+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:54.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:55.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:55.773+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:56 np0005592159 python3.9[137169]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:45:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:45:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:56.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:56.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:57 np0005592159 python3.9[137323]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:45:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:57.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:45:57 np0005592159 python3.9[137444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089556.6587877-377-7762467730545/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:45:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:57.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:57 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:57Z|00025|memory|INFO|16256 kB peak resident set size after 29.9 seconds
Jan 22 08:45:57 np0005592159 ovn_controller[133156]: 2026-01-22T13:45:57Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 22 08:45:57 np0005592159 podman[137445]: 2026-01-22 13:45:57.898515502 +0000 UTC m=+0.116224862 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 22 08:45:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 549 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:45:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:58.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 08:45:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:45:58.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 08:45:59 np0005592159 python3.9[137619]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:45:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:45:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 08:45:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:45:59.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 08:45:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:45:59.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:45:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:45:59 np0005592159 python3.9[137740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089557.9259353-377-240621287087411/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:00.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 08:46:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:00.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 08:46:01 np0005592159 python3.9[137891]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:01.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:01.831+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:02 np0005592159 python3.9[138012]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089560.906417-510-276814229553118/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:02 np0005592159 python3.9[138163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:02.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 08:46:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:02.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 08:46:03 np0005592159 python3.9[138284]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089562.3114493-510-185275370736306/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 08:46:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:03.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 08:46:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:03.853+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:04 np0005592159 python3.9[138434]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:04.826+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 08:46:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:04.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 08:46:05 np0005592159 python3.9[138589]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000009s ======
Jan 22 08:46:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:05.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Jan 22 08:46:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:05.806+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:06 np0005592159 python3.9[138741]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:06.773+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:06 np0005592159 python3.9[138820]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:06.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:07.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:07 np0005592159 python3.9[138972]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:07.797+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:07 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 554 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:08 np0005592159 python3.9[139050]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:08.795+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000010s ======
Jan 22 08:46:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:08.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Jan 22 08:46:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:09 np0005592159 python3.9[139203]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:09.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:09.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:09 np0005592159 python3.9[139355]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:10 np0005592159 python3.9[139434]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:10.768+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:10.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:11 np0005592159 python3.9[139636]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:11.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:11.803+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:11 np0005592159 python3.9[139714]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:12.812+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:12.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:13 np0005592159 python3.9[139867]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:13 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:13 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:13.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:13.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:13 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 564 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:14 np0005592159 python3.9[140057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:14.828+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:14 np0005592159 python3.9[140136]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:14.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:15.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:15 np0005592159 python3.9[140288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:15.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:16 np0005592159 python3.9[140366]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:16.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:16.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:17 np0005592159 python3.9[140519]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:17 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:17 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:17 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:17.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:17 np0005592159 systemd[1]: Starting Create netns directory...
Jan 22 08:46:17 np0005592159 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 22 08:46:17 np0005592159 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 22 08:46:17 np0005592159 systemd[1]: Finished Create netns directory.
Jan 22 08:46:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:17.772+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592159 python3.9[140713]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:18.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:18.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 569 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:19 np0005592159 python3.9[140865]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:19.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:19.816+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:20 np0005592159 python3.9[140988]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769089578.8787742-963-52340102198638/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:20.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:20.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:21 np0005592159 python3.9[141141]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:21.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:21.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:21 np0005592159 python3.9[141293]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:46:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:22 np0005592159 python3.9[141446]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:22.763+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:22.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:23 np0005592159 python3.9[141569]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089582.1962845-1061-61746672393403/.source.json _original_basename=.4ru_1mkh follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:23.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:23.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:24 np0005592159 python3.9[141719]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:24.817+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:24.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:25.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:25.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:26 np0005592159 python3.9[142144]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 22 08:46:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:26.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:26.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:27 np0005592159 python3.9[142296]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:46:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:27.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:28 np0005592159 podman[142321]: 2026-01-22 13:46:28.07267083 +0000 UTC m=+0.120251118 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 08:46:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:28 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 574 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:28.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:28.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:29 np0005592159 python3[142475]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:46:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:29.772+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:30.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:30.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:46:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.5 total, 600.0 interval#012Cumulative writes: 4785 writes, 21K keys, 4785 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 4785 writes, 607 syncs, 7.88 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4785 writes, 21K keys, 4785 commit groups, 1.0 writes per commit group, ingest: 18.18 MB, 0.03 MB/s#012Interval WAL: 4785 writes, 607 syncs, 7.88 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency 
Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) 
KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.5 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Jan 22 08:46:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:31.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:32.723+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:32.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:33.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 584 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:33.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:34.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:34.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:35.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:36.771+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:36.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:37 np0005592159 podman[142489]: 2026-01-22 13:46:37.669903157 +0000 UTC m=+8.531817029 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:37.786+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:37 np0005592159 podman[142672]: 2026-01-22 13:46:37.834539254 +0000 UTC m=+0.049362003 container create 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 08:46:37 np0005592159 podman[142672]: 2026-01-22 13:46:37.810017622 +0000 UTC m=+0.024840381 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:37 np0005592159 python3[142475]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 08:46:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:38.739+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:38.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:39.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 589 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:39.716+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:40 np0005592159 python3.9[142863]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:40.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:40.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:41 np0005592159 python3.9[143018]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:41 np0005592159 python3.9[143094]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:46:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:46:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:41.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:46:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:41.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:42 np0005592159 python3.9[143245]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769089601.5686336-1295-31270107419934/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:42.755+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:42 np0005592159 python3.9[143322]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:42 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:42 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:42.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:42 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:43.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:44 np0005592159 python3.9[143435]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:46:44 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:44.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:44 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:44 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:44.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:45 np0005592159 systemd[1]: Starting ovn_metadata_agent container...
Jan 22 08:46:45 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:46:45 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b9657b1dcd91b4246a3241bc74c99303fc9f2fa9d335018691a9ddb1987399/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4b9657b1dcd91b4246a3241bc74c99303fc9f2fa9d335018691a9ddb1987399/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 08:46:45 np0005592159 systemd[1]: Started /usr/bin/podman healthcheck run 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d.
Jan 22 08:46:45 np0005592159 podman[143476]: 2026-01-22 13:46:45.225844315 +0000 UTC m=+0.154364906 container init 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + sudo -E kolla_set_configs
Jan 22 08:46:45 np0005592159 podman[143476]: 2026-01-22 13:46:45.256098949 +0000 UTC m=+0.184619510 container start 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 08:46:45 np0005592159 edpm-start-podman-container[143476]: ovn_metadata_agent
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Validating config file
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Copying service configuration files
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Writing out command to execute
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: ++ cat /run_command
Jan 22 08:46:45 np0005592159 edpm-start-podman-container[143475]: Creating additional drop-in dependency for "ovn_metadata_agent" (65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d)
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + CMD=neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + ARGS=
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + sudo kolla_copy_cacerts
Jan 22 08:46:45 np0005592159 podman[143499]: 2026-01-22 13:46:45.341273784 +0000 UTC m=+0.069334614 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 08:46:45 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + [[ ! -n '' ]]
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + . kolla_extend_start
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: Running command: 'neutron-ovn-metadata-agent'
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + umask 0022
Jan 22 08:46:45 np0005592159 ovn_metadata_agent[143492]: + exec neutron-ovn-metadata-agent
Jan 22 08:46:45 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:45 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:45.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592159 systemd[1]: Started ovn_metadata_agent container.
Jan 22 08:46:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:45.738+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:46.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:46 np0005592159 python3.9[143732]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 22 08:46:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:46.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 INFO neutron.common.config [-] Logging enabled!#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.094 143497 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.095 143497 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.096 143497 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.097 143497 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.098 143497 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.099 143497 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.100 143497 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.101 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.102 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.103 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.104 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.105 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.106 143497 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.107 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.108 143497 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.109 143497 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.110 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.111 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.112 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.113 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.114 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.115 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.116 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.117 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.118 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.119 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.120 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.121 143497 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.122 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.123 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.124 143497 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.125 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.126 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.127 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.128 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.129 143497 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.130 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.131 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.132 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.133 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.134 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.135 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.136 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.137 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.138 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.139 143497 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.149 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.150 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.162 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c4fa18b6-ed0f-47ac-8eec-d1399749aa8e (UUID: c4fa18b6-ed0f-47ac-8eec-d1399749aa8e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.191 143497 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.192 143497 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.197 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.202 143497 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.208 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c4fa18b6-ed0f-47ac-8eec-d1399749aa8e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], external_ids={}, name=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, nb_cfg_timestamp=1769089536027, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.210 143497 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff0fc0dcf70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.211 143497 INFO oslo_service.service [-] Starting 1 workers#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.215 143497 DEBUG oslo_service.service [-] Started child 143757 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.219 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp405dvk24/privsep.sock']#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.219 143757 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-230623'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.242 143757 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.242 143757 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.243 143757 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.246 143757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.251 143757 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.257 143757 INFO eventlet.wsgi.server [-] (143757) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Jan 22 08:46:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:47.766+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:47 np0005592159 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.895 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.896 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp405dvk24/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.774 143856 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.778 143856 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.780 143856 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.780 143856 INFO oslo.privsep.daemon [-] privsep daemon running as pid 143856#033[00m
Jan 22 08:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:47.898 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[95d2790d-eaff-43ee-b037-c52c2acd3d99]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:48 np0005592159 python3.9[143890]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:46:48 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.469 143856 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:46:48 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.469 143856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:46:48 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:48.470 143856 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:46:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:48 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 594 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:48 np0005592159 python3.9[144020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089607.5637686-1431-46749925088289/.source.yaml _original_basename=.xq1exs8a follow=False checksum=a7c93daf1344287e5303b3d1648c714a9349cb4e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:46:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:48.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:48.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.180 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8b9dbb-995c-41e0-adbb-ea73c107a937]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.183 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, column=external_ids, values=({'neutron:ovn-metadata-id': '8451296e-09c6-52d3-9638-e3d9fe7a5f53'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.194 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.200 143497 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.201 143497 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.202 143497 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.203 143497 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.204 143497 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.205 143497 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.206 143497 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.207 143497 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.208 143497 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.209 143497 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.210 143497 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.211 143497 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.212 143497 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.213 143497 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.214 143497 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.215 143497 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.216 143497 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.217 143497 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.218 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.219 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.220 143497 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.221 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.222 143497 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.223 143497 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.224 143497 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.225 143497 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.226 143497 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.227 143497 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.228 143497 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.229 143497 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.230 143497 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.231 143497 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.232 143497 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.233 143497 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.234 143497 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.235 143497 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.236 143497 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.237 143497 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.238 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.239 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.240 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.241 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.242 143497 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 22 08:46:49 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:46:49.243 143497 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
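Editor's note: the DEBUG block above is the agent's full option dump. oslo.service logs every registered oslo.config option at startup via ConfigOpts.log_opt_values(), which is the function named at the end of each line (the trailing "log_opt_values /usr/lib/.../cfg.py:2609" is the DEBUG format suffix recording the calling function and source location), and options registered as secret, such as transport_url, are printed as ****. A minimal sketch of the same mechanism, assuming a small hypothetical option set rather than the agent's real configuration schema:

```python
import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

# Hypothetical options for illustration only; the metadata agent registers
# its real schema through neutron's config modules.
opts = [
    cfg.StrOpt('ovn_sb_connection', default='ssl:ovsdbserver-sb.openstack.svc:6642'),
    cfg.IntOpt('ovsdb_connection_timeout', default=180),
    cfg.StrOpt('transport_url', secret=True, default='rabbit://guest:guest@localhost//'),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts, group='ovn')
conf([], project='demo')  # parse an empty command line so the defaults apply

# Logs one "ovn.<option> = <value>" line per option at DEBUG, bracketed by
# banner lines of asterisks; secret options are rendered as "****",
# as in the dump above.
conf.log_opt_values(LOG, logging.DEBUG)
```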
Jan 22 08:46:49 np0005592159 systemd[1]: session-47.scope: Deactivated successfully.
Jan 22 08:46:49 np0005592159 systemd[1]: session-47.scope: Consumed 57.988s CPU time.
Jan 22 08:46:49 np0005592159 systemd-logind[787]: Session 47 logged out. Waiting for processes to exit.
Jan 22 08:46:49 np0005592159 systemd-logind[787]: Removed session 47.
Jan 22 08:46:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:49.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:49.810+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:50.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:50.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:46:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:51.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:46:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:51.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:52.807+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:52.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:52 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 604 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:46:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:46:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:53.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:46:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:53.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:46:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:46:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:46:54 np0005592159 systemd-logind[787]: New session 48 of user zuul.
Jan 22 08:46:54 np0005592159 systemd[1]: Started Session 48 of User zuul.
Jan 22 08:46:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:54.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:46:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:54.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:46:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:55.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:55 np0005592159 python3.9[144382]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:46:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:55.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:46:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:56.864+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:56 np0005592159 python3.9[144539]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:46:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 08:46:56 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:57 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 22 08:46:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:46:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:46:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:57.836+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:58 np0005592159 python3.9[144704]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:46:58 np0005592159 systemd[1]: Reloading.
Jan 22 08:46:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:58 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:46:58 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:46:58 np0005592159 podman[144707]: 2026-01-22 13:46:58.538352908 +0000 UTC m=+0.131344026 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:46:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:46:58.873+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:46:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:46:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:46:58.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:46:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:46:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:46:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:46:59.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:46:59 np0005592159 python3.9[144917]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:46:59 np0005592159 network[144934]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:46:59 np0005592159 network[144935]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:46:59 np0005592159 network[144936]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:46:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:46:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 609 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:00.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:00.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000058s ======
Jan 22 08:47:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:01.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000058s
Jan 22 08:47:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:47:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:01.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:02.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:02.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:03.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:03.732+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:04.710+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:04.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:05.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:05.684+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:06.641+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:06 np0005592159 python3.9[145254]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:06.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:07.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:07.665+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:08 np0005592159 python3.9[145407]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:08.686+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:08 np0005592159 python3.9[145561]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:08.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:09.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:09.676+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:10.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:10.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:11 np0005592159 python3.9[145714]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:11.760+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592159 python3.9[145919]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:12.740+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:12.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:13 np0005592159 python3.9[146073]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:13.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:13.715+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592159 python3.9[146226]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:47:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:13 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 619 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:14.668+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:15 np0005592159 python3.9[146382]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:15 np0005592159 podman[146506]: 2026-01-22 13:47:15.526253571 +0000 UTC m=+0.060781452 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:47:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:15.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:15.657+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:15 np0005592159 python3.9[146548]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592159 python3.9[146705]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:16.664+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:16 np0005592159 python3.9[146858]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:16.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:17.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:17 np0005592159 python3.9[147010]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:17.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:18 np0005592159 python3.9[147162]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:18.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:18.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:19.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:19.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:19 np0005592159 python3.9[147315]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 624 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:20.738+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:20.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592159 python3.9[147468]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:21.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:21.696+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:22 np0005592159 python3.9[147620]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:22.662+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:22 np0005592159 python3.9[147773]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:22.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:23 np0005592159 python3.9[147925]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:23.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:23.623+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:24 np0005592159 python3.9[148077]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:24.639+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:25.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:25 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 634 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:25 np0005592159 python3.9[148230]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:25.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:25.644+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:25 np0005592159 python3.9[148382]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:47:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:26.663+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:26 np0005592159 python3.9[148535]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:27.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:27.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:27.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:28 np0005592159 python3.9[148687]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:47:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:28.642+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:29.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592159 podman[148788]: 2026-01-22 13:47:29.094549898 +0000 UTC m=+0.143710754 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 08:47:29 np0005592159 python3.9[148865]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:47:29 np0005592159 systemd[1]: Reloading.
Jan 22 08:47:29 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:47:29 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:47:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:29.618+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:29.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 639 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:30.623+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:30 np0005592159 python3.9[149056]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:31 np0005592159 python3.9[149209]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:31.651+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:47:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:47:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:32.612+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:33.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:33.629+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:33 np0005592159 python3.9[149412]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:34 np0005592159 python3.9[149567]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:34.627+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592159 python3.9[149720]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:35.621+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:36 np0005592159 python3.9[149873]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:36.668+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592159 python3.9[150027]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:47:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:37.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:37.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:37.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:37 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 644 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:38 np0005592159 python3.9[150180]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 22 08:47:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:38.743+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:39.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:39 np0005592159 python3.9[150334]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:47:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:39.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:40 np0005592159 python3.9[150493]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 22 08:47:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:40.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:41.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:41.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:41.799+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:42.824+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:42 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:42 np0005592159 python3.9[150654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:47:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:43.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:43 np0005592159 python3.9[150738]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:47:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:43.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:44.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:45.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:45.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:46 np0005592159 podman[150747]: 2026-01-22 13:47:46.026183489 +0000 UTC m=+0.077113675 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 22 08:47:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:46.775+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:47.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.152 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 08:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 08:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:47:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 08:47:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:47.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:48.750+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:49.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:49 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:49.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:50.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:51.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:51.731+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:52.778+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:53.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:53.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:53.800+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:54.805+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:47:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:55.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:47:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:55.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:55.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:56.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:47:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:57.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:47:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:57.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:57.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:58.832+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:47:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:47:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:47:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:47:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:47:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:47:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:47:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:47:59.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:47:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:47:59.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:47:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:47:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:00 np0005592159 podman[150999]: 2026-01-22 13:48:00.059455635 +0000 UTC m=+0.124432543 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 08:48:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:00.754+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:01.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:01.711+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:01.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:02.732+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:03.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:03.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:04.818+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:05.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:05.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:05.793+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:06.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:07.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:07.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:07.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:08.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:09.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:09 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:09.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:09.765+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:10.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:11.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:11.715+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:11.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:12.674+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 08:48:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:13.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 08:48:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:13.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:13.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:14.724+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:15.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:15 np0005592159 ceph-mds[81154]: mds.beacon.cephfs.compute-2.zycvef missed beacon ack from the monitors
Jan 22 08:48:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:15.675+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:15.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:16 np0005592159 podman[151200]: 2026-01-22 13:48:16.201347653 +0000 UTC m=+0.051722933 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 08:48:16 np0005592159 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:16 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:16 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 22 08:48:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:16.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 08:48:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:17.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:48:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:17.716+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:17.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:18.708+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:19.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:19.661+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:19.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:20.637+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:21.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:21.635+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:21.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:22.637+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:23.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:23.597+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:23.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:24.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 08:48:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:25.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 08:48:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:25.538+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:25 np0005592159 kernel: SELinux:  Converting 2777 SID table entries...
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:48:25 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 22 08:48:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:25.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:26.546+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:27.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:27.511+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:27.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:28.560+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:29.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:29.550+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:29.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:30.510+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:30 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 22 08:48:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:31 np0005592159 podman[151317]: 2026-01-22 13:48:31.071463082 +0000 UTC m=+0.110263509 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:48:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:31.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:31.507+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:31.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:32.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:33.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:33.579+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:33.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:34.585+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 08:48:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:35.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 08:48:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:35.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:36.588+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:37.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:37.562+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:38.565+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:38 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:39.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:39.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:40.580+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:41.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:41.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:41.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:42.548+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.876545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722876609, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 3098, "num_deletes": 507, "total_data_size": 5956441, "memory_usage": 6060048, "flush_reason": "Manual Compaction"}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722913991, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 3880836, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12661, "largest_seqno": 15754, "table_properties": {"data_size": 3869661, "index_size": 6325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3781, "raw_key_size": 30436, "raw_average_key_size": 20, "raw_value_size": 3843024, "raw_average_value_size": 2563, "num_data_blocks": 276, "num_entries": 1499, "num_filter_entries": 1499, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089508, "oldest_key_time": 1769089508, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 37584 microseconds, and 9893 cpu microseconds.
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.914140) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 3880836 bytes OK
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.914185) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916323) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916342) EVENT_LOG_v1 {"time_micros": 1769089722916337, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.916362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 5941628, prev total WAL file size 5941628, number of live WAL files 2.
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.918010) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(3789KB)], [24(8116KB)]
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722918071, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 12192422, "oldest_snapshot_seqno": -1}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 5025 keys, 10032301 bytes, temperature: kUnknown
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722994495, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 10032301, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9996709, "index_size": 21914, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125757, "raw_average_key_size": 25, "raw_value_size": 9903583, "raw_average_value_size": 1970, "num_data_blocks": 912, "num_entries": 5025, "num_filter_entries": 5025, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089722, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.994856) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 10032301 bytes
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.996563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.3 rd, 131.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.7, 7.9 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(5.7) write-amplify(2.6) OK, records in: 6056, records dropped: 1031 output_compression: NoCompression
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.996595) EVENT_LOG_v1 {"time_micros": 1769089722996580, "job": 12, "event": "compaction_finished", "compaction_time_micros": 76527, "compaction_time_cpu_micros": 21766, "output_level": 6, "num_output_files": 1, "total_output_size": 10032301, "num_input_records": 6056, "num_output_records": 5025, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089722998259, "job": 12, "event": "table_file_deletion", "file_number": 26}
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089723001212, "job": 12, "event": "table_file_deletion", "file_number": 24}
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:42.917913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:48:43.001390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
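The ceph-mon RocksDB lines above embed machine-readable EVENT_LOG_v1 JSON payloads (compaction_started, table_file_creation, compaction_finished). A minimal sketch, not part of the captured journal, of one way to pull those payloads out of lines like these and summarize the compaction jobs they describe; the field names ("event", "job", "num_input_records", ...) come from the log lines themselves, and the file name "ceph-mon-journal.txt" is a placeholder assumption.

    import json
    import re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def iter_events(lines):
        # Yield the decoded EVENT_LOG_v1 dicts found in an iterable of journal lines.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    def summarize_compactions(lines):
        # Collect per-job stats from compaction_finished events, keyed by job id.
        jobs = {}
        for ev in iter_events(lines):
            if ev.get("event") == "compaction_finished":
                jobs[ev["job"]] = {
                    "output_level": ev["output_level"],
                    "input_records": ev["num_input_records"],
                    "output_records": ev["num_output_records"],
                    "output_bytes": ev["total_output_size"],
                    "compaction_time_micros": ev["compaction_time_micros"],
                }
        return jobs

    if __name__ == "__main__":
        with open("ceph-mon-journal.txt") as fh:  # placeholder path, not from the log
            for job, stats in sorted(summarize_compactions(fh).items()):
                print(job, stats)

For the job 12 event above this would report 6056 input records compacted to 5025 output records (10032301 bytes) at level 6 in roughly 76.5 ms, matching the human-readable "Compacted 1@0 + 1@6 files to L6" summary.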
Jan 22 08:48:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:43.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:43.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:43.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:44.519+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:45.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:45.529+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:45.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:46.500+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:46 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:47 np0005592159 podman[157261]: 2026-01-22 13:48:47.003243318 +0000 UTC m=+0.059121756 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:48:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:47.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.153 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.154 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:48:47.154 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:48:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:47.530+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:47.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:48 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:48.528+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:49.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:49 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:49.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:49.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:50.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000063s ======
Jan 22 08:48:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:51.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000063s
Jan 22 08:48:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:51.517+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:51.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:52.519+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:53.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:53.543+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:53.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:54.516+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:55.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:55.508+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:48:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:56.625+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 08:48:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:57.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 08:48:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:57.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:57.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:58.681+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:58 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:48:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:48:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:48:59.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:48:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:48:59.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:48:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:48:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:48:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:48:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:48:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:48:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:00.696+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:01.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:49:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:01.648+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:02 np0005592159 podman[167001]: 2026-01-22 13:49:02.07928568 +0000 UTC m=+0.120599294 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 08:49:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:02.603+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:03.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:03.618+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:03 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 734 sec, osd.2 has slow ops (SLOW_OPS)
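The SLOW_OPS health-check updates in this window report the oldest op blocked for 714, 719, 724, 734 sec and so on, which is consistent with a single stuck request on osd.2 rather than a stream of new slow ops. A minimal sketch, assuming the journal text is piped on stdin, that extracts that progression so the trend can be checked directly; the regex is written against the line format shown above.

    import re
    import sys

    PATTERN = re.compile(r"^(\w{3} +\d+ [\d:]+) .*oldest one blocked for (\d+) sec")

    for line in sys.stdin:
        m = PATTERN.match(line)
        if m:
            # Print the journal timestamp and the reported blocked duration in seconds.
            print(m.group(1), m.group(2))

If the difference between wall-clock time and the reported seconds stays roughly constant, the same op (here the rbd_mirror_snapshot_schedule omap-get-vals read on pool 'vms') has been blocked since one fixed point in time.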
Jan 22 08:49:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:03.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:04.608+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:05.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:05.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:06.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:07.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:07.594+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:08.599+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:09.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:09.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:09 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:10.644+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:11.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:11.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:11.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:12.638+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:13.673+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:13.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:14.686+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:15.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:15 np0005592159 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 22 08:49:15 np0005592159 irqbalance[785]: IRQ 26 affinity is now unmanaged
Jan 22 08:49:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:15.717+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:15.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:16.710+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:17.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:17.718+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:17.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:17 np0005592159 podman[168572]: 2026-01-22 13:49:17.997366316 +0000 UTC m=+0.053529071 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 08:49:18 np0005592159 kernel: SELinux:  Converting 2778 SID table entries...
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability network_peer_controls=1
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability open_perms=1
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability extended_socket_class=1
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability always_check_network=0
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 22 08:49:18 np0005592159 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
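
Note: the SELinux kernel lines above record a policy (re)load and the capability flags it enables; the same reload is what the dbus-broker avc op=load_policy entry a few lines below reacts to. The flags can be read back from selinuxfs at runtime; a small sketch, assuming the default /sys/fs/selinux mount point.

    from pathlib import Path

    # Default selinuxfs location (assumption; confirm with `findmnt -t selinuxfs`).
    CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

    # Each file holds "0" or "1", mirroring the "policy capability X=Y" lines above.
    for cap_file in sorted(CAPS_DIR.iterdir()):
        print(f"{cap_file.name}={cap_file.read_text().strip()}")
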
Jan 22 08:49:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:18.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:49:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 744 sec, osd.2 has slow ops (SLOW_OPS)
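
Note: the mon audit lines above show mgr.compute-0.nyayzk dispatching "config rm" (dropping osd_memory_target for OSDs on host compute-2) and "auth get" commands, while the health check keeps escalating SLOW_OPS on osd.2 (blocked ~744 s here and still climbing below). A hedged sketch of the read-only checks an operator might run at this point, assuming client.admin access to the cluster.

    import subprocess

    def ceph(*args):
        """Run a ceph CLI command and return stdout (assumes client.admin access)."""
        return subprocess.run(["ceph", *args], capture_output=True, text=True,
                              check=True).stdout

    # The health condition the mon keeps updating (SLOW_OPS on osd.2).
    print(ceph("health", "detail"))

    # Ops currently stuck on osd.2, addressed directly via `ceph tell`.
    print(ceph("tell", "osd.2", "dump_ops_in_flight"))

    # The option the mgr removed in the audit line above, read back for osd.2.
    print(ceph("config", "get", "osd.2", "osd_memory_target"))
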
Jan 22 08:49:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:19.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
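
Note: the anonymous "HEAD / HTTP/1.0" 200 requests above arrive about once per second, alternating between 192.168.122.100 and 192.168.122.102, which looks like external health probing of the local radosgw rather than client traffic. A small sketch of the same probe against the gateway; the port is an assumption, since the beast log does not show the listening address.

    import http.client

    # radosgw runs on this host per the log; the port is a placeholder (assumption).
    RGW_HOST = "localhost"
    RGW_PORT = 8080

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request("HEAD", "/")      # same anonymous probe the beast access log records
    resp = conn.getresponse()
    print(resp.status)             # 200 means the gateway answered, as logged above
    conn.close()
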
Jan 22 08:49:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:19.725+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:19 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:49:19 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 22 08:49:19 np0005592159 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 22 08:49:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:19.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:20.728+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:21.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:21.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:21.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:22.814+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:23.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:23.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000037s ======
Jan 22 08:49:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000037s
Jan 22 08:49:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:24 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:24.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:25.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:25.812+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:26.789+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:27.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:27.766+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:27.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:28.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:29.735+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:29.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:29 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:30.759+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592159 systemd[1]: Stopping OpenSSH server daemon...
Jan 22 08:49:31 np0005592159 systemd[1]: sshd.service: Deactivated successfully.
Jan 22 08:49:31 np0005592159 systemd[1]: Stopped OpenSSH server daemon.
Jan 22 08:49:31 np0005592159 systemd[1]: sshd.service: Consumed 3.916s CPU time, read 32.0K from disk, written 132.0K to disk.
Jan 22 08:49:31 np0005592159 systemd[1]: Stopped target sshd-keygen.target.
Jan 22 08:49:31 np0005592159 systemd[1]: Stopping sshd-keygen.target...
Jan 22 08:49:31 np0005592159 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:31 np0005592159 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:31 np0005592159 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 22 08:49:31 np0005592159 systemd[1]: Reached target sshd-keygen.target.
Jan 22 08:49:31 np0005592159 systemd[1]: Starting OpenSSH server daemon...
Jan 22 08:49:31 np0005592159 systemd[1]: Started OpenSSH server daemon.
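
Note: systemd stops sshd.service and starts it again within the same second here (typically an externally requested restart, e.g. by configuration tooling), and the three host-key generation units are skipped because their negated ConditionPathExists check on the cloud-init target link is not met. A hedged sketch reproducing the condition check and the restart; the restart itself needs root.

    import os
    import subprocess

    # Path tested (negated) by the sshd key-generation units in the log above:
    # while this symlink exists, key generation is skipped.
    COND = "/run/systemd/generator.early/multi-user.target.wants/cloud-init.target"
    print("key generation skipped:", os.path.exists(COND))

    # Equivalent of the stop/start pair recorded above (run deliberately, as root).
    subprocess.run(["systemctl", "restart", "sshd.service"], check=True)
    subprocess.run(["systemctl", "is-active", "sshd.service"], check=True)
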
Jan 22 08:49:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:31.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:31.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:31.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:32 np0005592159 podman[169583]: 2026-01-22 13:49:32.196003457 +0000 UTC m=+0.071787665 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 22 08:49:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:32.782+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:49:33 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:49:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:33.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:33 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:33 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:33 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:33 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
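
Note: each systemd "Reloading." entry is a daemon reload (here occurring around the man-db-cache-update run), and every reload re-executes the unit generators, which is why the rc-local and SysV generator notes recur throughout this log. The rc.local complaint is purely about the file's mode; a one-line check, using the path named in the message.

    import os
    import stat

    RC_LOCAL = "/etc/rc.d/rc.local"  # path named by systemd-rc-local-generator above

    mode = os.stat(RC_LOCAL).st_mode
    # If the execute bit is off, the generator keeps skipping rc.local on every
    # reload, exactly as the repeated messages in this log show.
    print(f"{RC_LOCAL} executable:", bool(mode & stat.S_IXUSR))
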
Jan 22 08:49:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:33.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:33.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:34.809+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:35.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:35.837+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:35.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:36 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:36.813+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:49:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:37.787+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:37.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:38.750+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:39.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:39.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:40.676+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:41.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:41.706+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:41 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:49:41 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:49:41 np0005592159 systemd[1]: man-db-cache-update.service: Consumed 11.231s CPU time.
Jan 22 08:49:41 np0005592159 systemd[1]: run-r34f66493f05c4848ac19fbbbaa195fd1.service: Deactivated successfully.
Jan 22 08:49:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:41.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:42.724+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000038s ======
Jan 22 08:49:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:43.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000038s
Jan 22 08:49:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:43.694+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:43.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:44.733+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:45.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:45.702+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:46.735+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:47.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 08:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 08:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:49:47.155 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
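
Note: the ovn_metadata_agent DEBUG lines above are oslo.concurrency's lock tracing around ProcessMonitor._check_child_processes (acquire, held for 0.000s, release). A minimal sketch of the decorator that emits exactly this acquire/release logging, assuming oslo.concurrency is installed; the lock name simply reuses the one from the log.

    import logging

    from oslo_concurrency import lockutils

    # lockutils logs the "Acquiring lock" / "acquired" / "released" lines at DEBUG.
    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Placeholder body; neutron's ProcessMonitor checks its child processes
        # while holding this lock.
        pass

    check_child_processes()
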
Jan 22 08:49:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:47.740+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:47.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:48.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:48 np0005592159 podman[178264]: 2026-01-22 13:49:48.989634285 +0000 UTC m=+0.051805048 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 08:49:49 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:49.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:49.776+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:49.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:50.788+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:51.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:51 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:51.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:51 np0005592159 python3.9[178412]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:51.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:51 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:51 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:51 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:52.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:53 np0005592159 python3.9[178604]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:53 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:53 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:53 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:53.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:53.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:53.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:54 np0005592159 python3.9[178795]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:54 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:54 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:54 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:54 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:54.777+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:55 np0005592159 python3.9[178986]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:49:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:55.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:55 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:55 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:55 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:55.816+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:55.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:49:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:56.848+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:57.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:57 np0005592159 python3.9[179177]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:57 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:57 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:57 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:57.825+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:49:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:57.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:49:58 np0005592159 python3.9[179416]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:58 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:58 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:58 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:58 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:58.826+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:49:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:49:59.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:49:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:49:59 np0005592159 python3.9[179607]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:49:59 np0005592159 systemd[1]: Reloading.
Jan 22 08:49:59 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:49:59 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:49:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:49:59.868+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:49:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:49:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:49:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:49:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:49:59.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:00 np0005592159 python3.9[179797]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 2 slow ops, oldest one blocked for 789 sec, osd.2 has slow ops
Jan 22 08:50:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:00.869+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:01.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:01 np0005592159 python3.9[179952]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:01 np0005592159 systemd[1]: Reloading.
Jan 22 08:50:01 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:01 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:01.868+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:01.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:02.917+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:03 np0005592159 podman[180016]: 2026-01-22 13:50:03.040239119 +0000 UTC m=+0.098199829 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 08:50:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:03.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:03.880+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:04.888+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:05.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:05 np0005592159 python3.9[180170]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 22 08:50:05 np0005592159 systemd[1]: Reloading.
Jan 22 08:50:05 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:50:05 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:50:05 np0005592159 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 22 08:50:05 np0005592159 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 22 08:50:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:05.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:05.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:06 np0005592159 python3.9[180364]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:06.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:07.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:07 np0005592159 python3.9[180520]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:07.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:07.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:08 np0005592159 python3.9[180675]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:08 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:08.802+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:09 np0005592159 python3.9[180831]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:09.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:09.804+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:09.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:09 np0005592159 python3.9[180986]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:10 np0005592159 python3.9[181142]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:10.798+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:11.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:11 np0005592159 python3.9[181297]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:11.792+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:12 np0005592159 python3.9[181452]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:12.834+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:13 np0005592159 python3.9[181608]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:13.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:13.793+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:13 np0005592159 python3.9[181763]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:14 np0005592159 python3.9[181919]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:14.769+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:15.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:15 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:15 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 804 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:15.723+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:15 np0005592159 python3.9[182074]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:16 np0005592159 python3.9[182229]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:16.705+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:17.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:17 np0005592159 python3.9[182385]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 22 08:50:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:17.748+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:18.709+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:19 np0005592159 podman[182563]: 2026-01-22 13:50:19.090546302 +0000 UTC m=+0.044708126 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 08:50:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:19.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:19 np0005592159 python3.9[182610]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:19.729+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:19 np0005592159 python3.9[182762]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:19.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 809 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:20 np0005592159 python3.9[182915]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:20.705+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:21.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:21 np0005592159 python3.9[183067]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:21.744+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:22 np0005592159 python3.9[183219]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:22 np0005592159 python3.9[183372]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:50:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:22.791+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:23.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:23.808+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:23 np0005592159 python3.9[183522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:50:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:24 np0005592159 python3.9[183675]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:24.842+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:25.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:25 np0005592159 python3.9[183800]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089824.161673-1649-137362732130020/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:25.823+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:26 np0005592159 python3.9[183952]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:26 np0005592159 python3.9[184078]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089825.7541497-1649-199381904115958/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:26.854+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:27.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:27 np0005592159 python3.9[184230]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:27.859+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:50:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:50:28 np0005592159 python3.9[184355]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089827.0180378-1649-194623592548719/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:28 np0005592159 python3.9[184508]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:28.867+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:29.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:29 np0005592159 python3.9[184633]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089828.2613804-1649-54547709334028/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:29.838+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:30.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:30 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 814 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:30 np0005592159 python3.9[184785]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:30.882+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:31.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:31.857+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:32 np0005592159 python3.9[184911]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089829.6403964-1649-17867872157733/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:32.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:32 np0005592159 python3.9[185064]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:32.890+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:33.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:33 np0005592159 python3.9[185189]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089832.199473-1649-215269016055670/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:33 np0005592159 podman[185190]: 2026-01-22 13:50:33.333045307 +0000 UTC m=+0.075481532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 08:50:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:33.905+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:33 np0005592159 python3.9[185365]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:34.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:34 np0005592159 python3.9[185488]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089833.413527-1649-76366723872295/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:34 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:34 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 819 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:34.877+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592159 python3.9[185641]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:35.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:35 np0005592159 python3.9[185766]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769089834.5563552-1649-38729854888181/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:35.871+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:36.843+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:37.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:37 np0005592159 python3.9[185919]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 22 08:50:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:37.794+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:38.799+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:38 np0005592159 python3.9[186254]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:39.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:39 np0005592159 python3.9[186406]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:39 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:39 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 824 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:50:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:39.821+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:40.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:40 np0005592159 python3.9[186558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:50:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:40.852+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:40 np0005592159 python3.9[186711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:41.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:41 np0005592159 python3.9[186863]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:41.819+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:42 np0005592159 python3.9[187015]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:42.785+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:43 np0005592159 python3.9[187168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:43 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 834 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:43.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:43 np0005592159 python3.9[187320]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:43.829+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:44 np0005592159 python3.9[187472]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:44.827+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:44 np0005592159 python3.9[187625]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:45.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:45 np0005592159 python3.9[187777]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:45.805+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:46 np0005592159 python3.9[187929]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:46 np0005592159 python3.9[188082]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:46.815+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.156 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.156 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:50:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:50:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:47.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:47 np0005592159 python3.9[188234]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:47.784+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:48.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:48.780+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:49 np0005592159 python3.9[188387]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:49.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:49 np0005592159 podman[188482]: 2026-01-22 13:50:49.59918458 +0000 UTC m=+0.050221199 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 08:50:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:49.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:49 np0005592159 python3.9[188527]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089848.6966958-2311-144238644784835/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:50 np0005592159 python3.9[188679]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:50.761+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:50 np0005592159 python3.9[188803]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089849.9526317-2311-27249398415297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:51.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:51 np0005592159 python3.9[188955]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:51.720+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:52 np0005592159 python3.9[189078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089851.1138203-2311-254952131454887/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:52.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:52.707+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592159 python3.9[189231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:53.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:53 np0005592159 python3.9[189404]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089852.37494-2311-96388371195678/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:53.730+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 839 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:50:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:54 np0005592159 python3.9[189556]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:54 np0005592159 python3.9[189680]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089853.5406158-2311-9965367156643/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:54.741+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:55 np0005592159 python3.9[189832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:55.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:55 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:55.745+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:55 np0005592159 python3.9[189955]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089854.7506-2311-115787325249875/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:56 np0005592159 python3.9[190107]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:56.779+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:50:56 np0005592159 python3.9[190231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089855.9375217-2311-146330929579461/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:50:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:57.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:50:57 np0005592159 python3.9[190383]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:57.757+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:57 np0005592159 python3.9[190556]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089857.0524416-2311-90713588702022/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.107333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858107428, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1751, "num_deletes": 252, "total_data_size": 3615608, "memory_usage": 3669544, "flush_reason": "Manual Compaction"}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858119576, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 1454881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15759, "largest_seqno": 17505, "table_properties": {"data_size": 1449261, "index_size": 2567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 17004, "raw_average_key_size": 21, "raw_value_size": 1435883, "raw_average_value_size": 1831, "num_data_blocks": 112, "num_entries": 784, "num_filter_entries": 784, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089723, "oldest_key_time": 1769089723, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12264 microseconds, and 6068 cpu microseconds.
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119619) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 1454881 bytes OK
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.119638) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120891) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120908) EVENT_LOG_v1 {"time_micros": 1769089858120903, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.120927) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 3607299, prev total WAL file size 3607299, number of live WAL files 2.
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121831) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(1420KB)], [27(9797KB)]
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858121872, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 11487182, "oldest_snapshot_seqno": -1}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 5351 keys, 8490136 bytes, temperature: kUnknown
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858188084, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 8490136, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8455584, "index_size": 20042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 133878, "raw_average_key_size": 25, "raw_value_size": 8359733, "raw_average_value_size": 1562, "num_data_blocks": 828, "num_entries": 5351, "num_filter_entries": 5351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089858, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.188409) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 8490136 bytes
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.190239) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.3 rd, 128.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(13.7) write-amplify(5.8) OK, records in: 5809, records dropped: 458 output_compression: NoCompression
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.190272) EVENT_LOG_v1 {"time_micros": 1769089858190258, "job": 14, "event": "compaction_finished", "compaction_time_micros": 66295, "compaction_time_cpu_micros": 18484, "output_level": 6, "num_output_files": 1, "total_output_size": 8490136, "num_input_records": 5809, "num_output_records": 5351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858190934, "job": 14, "event": "table_file_deletion", "file_number": 29}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089858194192, "job": 14, "event": "table_file_deletion", "file_number": 27}
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.121726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:50:58.194293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:50:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:50:58.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:58 np0005592159 python3.9[190709]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:50:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:58.720+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:50:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 844 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:50:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:50:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:50:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:50:59.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:50:59 np0005592159 python3.9[190832]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089858.1223714-2311-187916882300631/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:50:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:50:59.698+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:50:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:00 np0005592159 python3.9[190984]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:00.669+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:00 np0005592159 python3.9[191108]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089859.8205059-2311-49699390424987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:01 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:01.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:01 np0005592159 python3.9[191260]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:01.646+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:02 np0005592159 python3.9[191383]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089861.0160618-2311-224093039950460/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:51:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:51:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:02.660+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:02 np0005592159 python3.9[191536]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:03 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 854 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:03.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:03 np0005592159 python3.9[191659]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089862.272363-2311-197557444458320/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:03.634+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:03 np0005592159 podman[191783]: 2026-01-22 13:51:03.928155012 +0000 UTC m=+0.081976973 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 08:51:04 np0005592159 python3.9[191832]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:04.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:04 np0005592159 python3.9[191962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089863.5299273-2311-267890357109361/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:04.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:05 np0005592159 python3.9[192114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:05.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:05.617+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:05 np0005592159 python3.9[192237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089864.7334044-2311-247877618896651/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:51:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:51:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:06.594+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:07.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:07.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:07 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 08:51:08 np0005592159 python3.9[192388]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:08.600+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:09.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:09 np0005592159 python3.9[192544]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 22 08:51:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:09.583+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592159 auditd[699]: Audit daemon rotating log files
Jan 22 08:51:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 859 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:10 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:10.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:10.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:11.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:11.553+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:12.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:12.555+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:12 np0005592159 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 22 08:51:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:12 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:12 np0005592159 python3.9[192702]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:51:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:13.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:51:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:13.549+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592159 python3.9[192854]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:14.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:14 np0005592159 python3.9[193006]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:14.513+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:15 np0005592159 python3.9[193159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:15.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:15.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:15 np0005592159 python3.9[193313]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:16.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:16.486+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:16 np0005592159 python3.9[193466]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:17.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:17.483+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:17 np0005592159 python3.9[193618]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:18.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:18.473+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:18 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 864 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:18 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:18 np0005592159 python3.9[193821]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:19.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:19.424+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:19 np0005592159 python3.9[193973]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:19 np0005592159 podman[194097]: 2026-01-22 13:51:19.943774445 +0000 UTC m=+0.066493874 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 08:51:20 np0005592159 python3.9[194144]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:20.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:20.458+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:20 np0005592159 python3.9[194297]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:21 np0005592159 systemd[1]: Reloading.
Jan 22 08:51:21 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:21 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:21.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:21 np0005592159 systemd[1]: Starting libvirt logging daemon socket...
Jan 22 08:51:21 np0005592159 systemd[1]: Listening on libvirt logging daemon socket.
Jan 22 08:51:21 np0005592159 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 22 08:51:21 np0005592159 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 22 08:51:21 np0005592159 systemd[1]: Starting libvirt logging daemon...
Jan 22 08:51:21 np0005592159 systemd[1]: Started libvirt logging daemon.
Jan 22 08:51:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:21.501+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:22 np0005592159 python3.9[194490]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:22 np0005592159 systemd[1]: Reloading.
Jan 22 08:51:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:22.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:22 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:22 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:22.545+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:22 np0005592159 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 22 08:51:22 np0005592159 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 22 08:51:22 np0005592159 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 22 08:51:22 np0005592159 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 22 08:51:22 np0005592159 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 22 08:51:22 np0005592159 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 22 08:51:22 np0005592159 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:51:22 np0005592159 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:51:23 np0005592159 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 22 08:51:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:23.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:23 np0005592159 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 22 08:51:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:23 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:23 np0005592159 python3.9[194708]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:23 np0005592159 systemd[1]: Reloading.
Jan 22 08:51:23 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:23 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:23.573+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:23 np0005592159 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 22 08:51:23 np0005592159 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 22 08:51:23 np0005592159 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 22 08:51:23 np0005592159 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 22 08:51:23 np0005592159 systemd[1]: Starting libvirt proxy daemon...
Jan 22 08:51:23 np0005592159 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 22 08:51:23 np0005592159 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 22 08:51:23 np0005592159 systemd[1]: Started libvirt proxy daemon.
Jan 22 08:51:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:24.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:24.571+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:24 np0005592159 python3.9[194929]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:24 np0005592159 systemd[1]: Reloading.
Jan 22 08:51:24 np0005592159 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5d18beed-d68d-4a81-b559-48d1464af1ec
Jan 22 08:51:24 np0005592159 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
    *****  Plugin dac_override (91.4 confidence) suggests   **********************
    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.
    *****  Plugin catchall (9.59 confidence) suggests   **************************
    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Jan 22 08:51:24 np0005592159 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 5d18beed-d68d-4a81-b559-48d1464af1ec
Jan 22 08:51:24 np0005592159 setroubleshoot[194679]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
    *****  Plugin dac_override (91.4 confidence) suggests   **********************
    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.
    *****  Plugin catchall (9.59 confidence) suggests   **************************
    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
Jan 22 08:51:24 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:24 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:24 np0005592159 systemd[1]: Listening on libvirt locking daemon socket.
Jan 22 08:51:24 np0005592159 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 22 08:51:24 np0005592159 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 22 08:51:25 np0005592159 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 22 08:51:25 np0005592159 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 22 08:51:25 np0005592159 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 22 08:51:25 np0005592159 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 22 08:51:25 np0005592159 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 22 08:51:25 np0005592159 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 22 08:51:25 np0005592159 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 22 08:51:25 np0005592159 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:51:25 np0005592159 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:51:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:25.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:25.560+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:25 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:25 np0005592159 python3.9[195145]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:51:25 np0005592159 systemd[1]: Reloading.
Jan 22 08:51:25 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:51:25 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:51:26 np0005592159 systemd[1]: Starting libvirt secret daemon socket...
Jan 22 08:51:26 np0005592159 systemd[1]: Listening on libvirt secret daemon socket.
Jan 22 08:51:26 np0005592159 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 22 08:51:26 np0005592159 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 22 08:51:26 np0005592159 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 22 08:51:26 np0005592159 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 22 08:51:26 np0005592159 systemd[1]: Starting libvirt secret daemon...
Jan 22 08:51:26 np0005592159 systemd[1]: Started libvirt secret daemon.
Jan 22 08:51:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:26.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:26.523+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:27.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:27 np0005592159 python3.9[195358]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:27.476+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:28 np0005592159 python3.9[195510]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:28 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:28.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:28.484+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:28 np0005592159 python3.9[195663]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:29.525+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:30 np0005592159 python3.9[195817]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:51:30 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:30.552+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:31.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:31 np0005592159 python3.9[195968]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.505690) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891505779, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 645, "num_deletes": 251, "total_data_size": 922669, "memory_usage": 935736, "flush_reason": "Manual Compaction"}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891513087, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 605960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17510, "largest_seqno": 18150, "table_properties": {"data_size": 602925, "index_size": 943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 19, "raw_value_size": 596493, "raw_average_value_size": 1469, "num_data_blocks": 42, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089858, "oldest_key_time": 1769089858, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 7430 microseconds, and 3540 cpu microseconds.
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.513134) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 605960 bytes OK
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.513153) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514557) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514576) EVENT_LOG_v1 {"time_micros": 1769089891514570, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.514596) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 919043, prev total WAL file size 919043, number of live WAL files 2.
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.515274) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(591KB)], [30(8291KB)]
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891515364, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 9096096, "oldest_snapshot_seqno": -1}
Jan 22 08:51:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:31.561+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 5246 keys, 7419107 bytes, temperature: kUnknown
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891568167, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 7419107, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7386143, "index_size": 18774, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 132570, "raw_average_key_size": 25, "raw_value_size": 7292773, "raw_average_value_size": 1390, "num_data_blocks": 771, "num_entries": 5246, "num_filter_entries": 5246, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089891, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.568526) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7419107 bytes
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.570117) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.8 rd, 140.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.1 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(27.3) write-amplify(12.2) OK, records in: 5757, records dropped: 511 output_compression: NoCompression
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.570149) EVENT_LOG_v1 {"time_micros": 1769089891570131, "job": 16, "event": "compaction_finished", "compaction_time_micros": 52948, "compaction_time_cpu_micros": 22730, "output_level": 6, "num_output_files": 1, "total_output_size": 7419107, "num_input_records": 5757, "num_output_records": 5246, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891570588, "job": 16, "event": "table_file_deletion", "file_number": 32}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089891572042, "job": 16, "event": "table_file_deletion", "file_number": 30}
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.515182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:51:31.572205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
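[editor's note] The rocksdb EVENT_LOG_v1 entries above embed one JSON object per line after the "EVENT_LOG_v1 " marker. A minimal sketch for pulling the compaction statistics out of such a line (the sample string below is an abbreviated copy of the compaction_finished event from this log; use the full journal line in practice):

    import json

    # Abbreviated copy of the ceph-mon rocksdb "compaction_finished" line above.
    line = ('rocksdb: (Original Log Time 2026/01/22-13:51:31.570149) EVENT_LOG_v1 '
            '{"time_micros": 1769089891570131, "job": 16, "event": "compaction_finished", '
            '"compaction_time_micros": 52948, "output_level": 6, "num_output_files": 1, '
            '"total_output_size": 7419107, "num_input_records": 5757, '
            '"num_output_records": 5246, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}')

    marker = "EVENT_LOG_v1 "
    payload = line[line.index(marker) + len(marker):]  # JSON starts right after the marker
    event = json.loads(payload)

    if event.get("event") == "compaction_finished":
        print(f'job {event["job"]}: {event["num_input_records"]} -> '
              f'{event["num_output_records"]} records, '
              f'{event["total_output_size"]} bytes in '
              f'{event["compaction_time_micros"] / 1e6:.3f}s, '
              f'lsm_state={event["lsm_state"]}')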
Jan 22 08:51:31 np0005592159 python3.9[196089]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089890.7869368-3386-267387329374569/.source.xml follow=False _original_basename=secret.xml.j2 checksum=661e779e9ad9ab9796e6f7af12c5e6a2862cccb5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
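[editor's note] The recurring radosgw "beast" access lines (anonymous HEAD / requests, presumably load-balancer health probes; that reading is an inference, not stated in the log) follow a fixed layout that a regular expression can parse. A minimal sketch against the line just above:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
        r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:13:51:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    m = BEAST.match(line)
    if m:
        print(m.group("client"), m.group("request"), m.group("status"), m.group("latency"))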
Jan 22 08:51:32 np0005592159 python3.9[196242]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 088fe176-0106-5401-803c-2da38b73b76a#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
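[editor's note] In the python3.9[196242] command entry just above, "#012" is the syslog escape for a newline (octal 012), so _raw_params decodes to two commands run in sequence: virsh secret-undefine <uuid>, then virsh secret-define --file /tmp/secret.xml. A minimal Python sketch of the same sequence; the UUID and XML path are taken from the log, while the check= handling is illustrative rather than part of the original task:

    import subprocess

    SECRET_UUID = "088fe176-0106-5401-803c-2da38b73b76a"  # from the log line above
    SECRET_XML = "/tmp/secret.xml"                        # written by the earlier copy task

    # Drop any stale definition first; ignore the error if the secret does not exist yet.
    subprocess.run(["virsh", "secret-undefine", SECRET_UUID], check=False)

    # Define the secret from the rendered XML, failing loudly if virsh rejects it.
    subprocess.run(["virsh", "secret-define", "--file", SECRET_XML], check=True)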
Jan 22 08:51:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:32.543+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:32 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:33.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:33 np0005592159 python3.9[196404]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:33.522+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 884 sec, osd.2 has slow ops (SLOW_OPS)
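[editor's note] The SLOW_OPS health check that the monitor keeps re-logging above can also be queried directly. A minimal sketch, assuming the ceph CLI and an admin keyring are available on the node and that "ceph health detail --format json" returns the post-Luminous checks layout (verify against your release):

    import json
    import subprocess

    out = subprocess.run(["ceph", "health", "detail", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    health = json.loads(out)

    # Health checks are keyed by name, e.g. the SLOW_OPS entry seen in this log.
    slow = health.get("checks", {}).get("SLOW_OPS")
    if slow:
        print(slow["summary"]["message"])
        for item in slow.get("detail", []):
            print(" ", item["message"])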
Jan 22 08:51:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:34.565+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:34 np0005592159 podman[196682]: 2026-01-22 13:51:34.712528633 +0000 UTC m=+0.094165231 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 08:51:34 np0005592159 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 22 08:51:34 np0005592159 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 22 08:51:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:35.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:35.551+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:35 np0005592159 python3.9[196895]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:36.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:36.597+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:36 np0005592159 python3.9[197048]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:37 np0005592159 python3.9[197171]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089896.1838663-3551-224981364821614/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:37.592+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:38 np0005592159 python3.9[197373]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:38.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:38.631+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:39 np0005592159 python3.9[197526]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:51:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:51:39 np0005592159 python3.9[197604]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:39.606+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 889 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:51:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:40.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:51:40 np0005592159 python3.9[197756]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:40.640+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:40 np0005592159 python3.9[197835]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.8o62472j recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:41.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:41 np0005592159 python3.9[197987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:41.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:42 np0005592159 python3.9[198065]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:42.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:42.620+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:43 np0005592159 python3.9[198218]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
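[editor's note] The "nft -j list ruleset" check above emits the libnftables JSON form: a flat list of objects under an "nftables" key, as documented in libnftables-json(5). A minimal sketch, assuming nft is installed, that summarises the current ruleset the same way:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    ruleset = json.loads(out)

    # Each element wraps exactly one object type: metainfo, table, chain, rule, ...
    tables = [o["table"] for o in ruleset["nftables"] if "table" in o]
    chains = [o["chain"] for o in ruleset["nftables"] if "chain" in o]
    print(f"{len(tables)} tables, {len(chains)} chains")
    for t in tables:
        print(f'  table {t["family"]} {t["name"]}')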
Jan 22 08:51:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:43.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:43.595+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:43 np0005592159 python3[198371]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 22 08:51:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:44.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:44.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:44 np0005592159 python3.9[198526]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:45 np0005592159 python3.9[198604]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:45.615+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:46 np0005592159 python3.9[198756]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:46 np0005592159 python3.9[198882]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089905.5175152-3818-260952607559155/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:46.658+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:51:47.157 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:51:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:47.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:47 np0005592159 python3.9[199036]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:47.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:47 np0005592159 python3.9[199114]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:51:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:51:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:48.630+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:48 np0005592159 python3.9[199267]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:49 np0005592159 python3.9[199345]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:49.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:49.587+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592159 python3.9[199497]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:50 np0005592159 podman[199595]: 2026-01-22 13:51:50.417210722 +0000 UTC m=+0.049000527 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 08:51:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:50.604+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592159 python3.9[199643]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769089909.5171597-3935-231743260845193/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:50 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 894 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:50 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:51.569+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592159 python3.9[199795]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:52.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:52.619+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:52 np0005592159 python3.9[199948]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:53.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:53.572+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:53 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:53 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 904 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:51:54 np0005592159 python3.9[200235]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
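[editor's note] Decoding the "#012" newline escapes in the blockinfile arguments above, the managed block written into /etc/sysconfig/nftables.conf (and validated with the module's "nft -c -f %s" check) would read as follows; the BEGIN/END markers come from the marker, marker_begin and marker_end parameters shown in the log:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK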
Jan 22 08:51:54 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:51:54 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:51:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:54.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:54.622+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:54 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:55 np0005592159 python3.9[200389]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:55.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:55.606+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:55 np0005592159 python3.9[200542]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:51:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:56.621+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:51:57 np0005592159 python3.9[200697]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:51:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:57.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:57.607+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:57 np0005592159 python3.9[200852]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:51:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:51:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:58.562+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:58 np0005592159 python3.9[201055]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:51:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:51:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:51:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:51:59.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:51:59 np0005592159 python3.9[201178]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089918.2983782-4151-192221711211481/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
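The copy task only records the destination path, mode 0644 and a SHA-1 checksum; the unit's body is hidden (content=NOT_LOGGING_PARAMETER). Purely as a hypothetical illustration of a grouping target like this (the real edpm_libvirt.target shipped by edpm-ansible may look different), one could create such a unit with:

    # Hypothetical content; the actual unit body is not present in this log.
    cat > /etc/systemd/system/edpm_libvirt.target <<'EOF'
    [Unit]
    Description=EDPM libvirt target (illustrative example only)
    Wants=virtqemud.service virtproxyd.service

    [Install]
    WantedBy=multi-user.target
    EOF
    chmod 0644 /etc/systemd/system/edpm_libvirt.target   # matches mode=0644 above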
Jan 22 08:51:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:51:59.566+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:51:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:51:59 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 909 sec, osd.2 has slow ops (SLOW_OPS)
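At this point the monitor has been flagging SLOW_OPS for roughly 15 minutes: the oldest blocked op is an omap-get-vals read of rbd_mirror_snapshot_schedule in PG 2.12, and osd.2 re-reports it every second. A sketch of the usual drill-down, assuming admin access to this cluster (on a cephadm/containerized deployment the daemon commands are easiest from inside cephadm shell):

    ceph health detail | grep -A3 SLOW_OPS     # cluster-wide view of the warning
    ceph daemon osd.2 dump_ops_in_flight       # run on the host carrying osd.2
    ceph daemon osd.2 dump_historic_slow_ops   # recently completed slow ops
    ceph pg map 2.12                           # acting set of the affected PG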
Jan 22 08:52:00 np0005592159 python3.9[201330]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:52:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:52:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:00.611+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:00 np0005592159 python3.9[201454]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089919.7510564-4196-246123384437746/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:52:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:52:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:01.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:01.656+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:02.652+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:03 np0005592159 python3.9[201607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:03.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:03 np0005592159 python3.9[201730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089921.289507-4241-198919477861700/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:03.663+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:03 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:04.632+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:04 np0005592159 python3.9[201883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:04 np0005592159 systemd[1]: Reloading.
Jan 22 08:52:04 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:04 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:05 np0005592159 systemd[1]: Reached target edpm_libvirt.target.
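The ansible.builtin.systemd call above (daemon_reload=True, enabled=True, state=restarted) maps onto the reload and the "Reached target" message that follow; by hand the same step would be roughly:

    systemctl daemon-reload                     # pick up the new unit files
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target
    systemctl is-active edpm_libvirt.target     # expect "active", matching the log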
Jan 22 08:52:05 np0005592159 podman[201919]: 2026-01-22 13:52:05.082895578 +0000 UTC m=+0.089844713 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
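Interleaved with the deployment, podman keeps logging periodic health_status=healthy events for the ovn_controller container; the check is the /openstack/healthcheck script mounted from /var/lib/openstack/healthchecks/ovn_controller. To run or inspect that check manually, something like:

    podman healthcheck run ovn_controller                    # exit status 0 means healthy
    podman inspect ovn_controller | grep -i -A3 '"Health'    # recorded health state in the JSON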
Jan 22 08:52:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:05.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:05 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:05.598+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:06.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:06 np0005592159 python3.9[202097]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 22 08:52:06 np0005592159 systemd[1]: Reloading.
Jan 22 08:52:06 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:06 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:06.634+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:06 np0005592159 systemd[1]: Reloading.
Jan 22 08:52:06 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:06 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
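Unlike the earlier call, this ansible.builtin.systemd task for edpm_libvirt_guests passes enabled=True with no state, so the unit is only enabled for future boots (plus a daemon-reload); nothing is started now. Roughly equivalent to:

    systemctl daemon-reload
    systemctl enable edpm_libvirt_guests.service     # enable only, no start requested
    systemctl is-enabled edpm_libvirt_guests.service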
Jan 22 08:52:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:52:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:07.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:52:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:07.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:08 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:08.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:08.711+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:09.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 914 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:09.714+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:09 np0005592159 systemd[1]: session-48.scope: Deactivated successfully.
Jan 22 08:52:09 np0005592159 systemd[1]: session-48.scope: Consumed 3min 24.178s CPU time.
Jan 22 08:52:09 np0005592159 systemd-logind[787]: Session 48 logged out. Waiting for processes to exit.
Jan 22 08:52:09 np0005592159 systemd-logind[787]: Removed session 48.
Jan 22 08:52:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:10.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:10.721+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000089s ======
Jan 22 08:52:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:11.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000089s
Jan 22 08:52:11 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:11.677+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:12.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:13.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:13.682+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 08:52:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 08:52:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:14 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:14 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 924 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:52:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:14.667+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:15 np0005592159 systemd-logind[787]: New session 49 of user zuul.
Jan 22 08:52:15 np0005592159 systemd[1]: Started Session 49 of User zuul.
Jan 22 08:52:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 08:52:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:15.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 08:52:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:15.640+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:16 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:16 np0005592159 python3.9[202403]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:52:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:16.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:16.633+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 08:52:17 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:17.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:17.632+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:18 np0005592159 python3.9[202558]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:18 np0005592159 network[202623]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:18 np0005592159 network[202624]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:18 np0005592159 network[202625]: It is advised to switch to 'NetworkManager' instead for network management.
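Gathering service facts pokes the legacy 'network' SysV script, which answers with its deprecation warning: network-scripts is scheduled for removal and NetworkManager is the suggested replacement. To check how far this host still depends on the legacy service, for example:

    systemctl is-enabled network.service        # unit auto-generated from /etc/rc.d/init.d/network
    systemctl status network.service --no-pager
    nmcli general status                        # only meaningful if NetworkManager is installed/running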
Jan 22 08:52:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:18.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:18.648+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.678401) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938678460, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 763, "num_deletes": 250, "total_data_size": 1334039, "memory_usage": 1355136, "flush_reason": "Manual Compaction"}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938767039, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 868137, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18155, "largest_seqno": 18913, "table_properties": {"data_size": 864527, "index_size": 1390, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8225, "raw_average_key_size": 17, "raw_value_size": 856944, "raw_average_value_size": 1862, "num_data_blocks": 61, "num_entries": 460, "num_filter_entries": 460, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089892, "oldest_key_time": 1769089892, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 88765 microseconds, and 5161 cpu microseconds.
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.767175) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 868137 bytes OK
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.767218) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768219) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768234) EVENT_LOG_v1 {"time_micros": 1769089938768230, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768250) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 1329876, prev total WAL file size 1346271, number of live WAL files 2.
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.769034) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(847KB)], [33(7245KB)]
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938769087, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 8287244, "oldest_snapshot_seqno": -1}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5194 keys, 7744059 bytes, temperature: kUnknown
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938978081, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 7744059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7711088, "index_size": 18909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12997, "raw_key_size": 133612, "raw_average_key_size": 25, "raw_value_size": 7618246, "raw_average_value_size": 1466, "num_data_blocks": 757, "num_entries": 5194, "num_filter_entries": 5194, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769089938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.978391) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 7744059 bytes
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.982889) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.6 rd, 37.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 7.1 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(18.5) write-amplify(8.9) OK, records in: 5706, records dropped: 512 output_compression: NoCompression
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.982911) EVENT_LOG_v1 {"time_micros": 1769089938982899, "job": 18, "event": "compaction_finished", "compaction_time_micros": 209148, "compaction_time_cpu_micros": 17973, "output_level": 6, "num_output_files": 1, "total_output_size": 7744059, "num_input_records": 5706, "num_output_records": 5194, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938983202, "job": 18, "event": "table_file_deletion", "file_number": 35}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769089938984637, "job": 18, "event": "table_file_deletion", "file_number": 33}
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.768939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:52:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:52:18.984689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
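The rocksdb burst above is the ceph-mon flushing its memtable (job 17, an ~868 KB level-0 table) and then manually compacting level 0 plus level 6 into a single ~7.7 MB L6 file (job 18), after which the old WAL and SST files are deleted. Monitors schedule these compactions themselves; if one ever wanted to request the same thing explicitly, a hedged sketch:

    ceph tell mon.compute-2 compact     # ask this mon to compact its store (normally unnecessary)
    # store.db path as seen inside the mon container, per the delete_scheduler lines above
    du -sh /var/lib/ceph/mon/ceph-compute-2/store.db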
Jan 22 08:52:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:19.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:19.610+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 929 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:19 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:19 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:20.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:20 np0005592159 podman[202664]: 2026-01-22 13:52:20.679055548 +0000 UTC m=+0.055052051 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 08:52:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:21.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:21.568+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:22 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:52:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:52:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:22.617+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:23.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:23 np0005592159 python3.9[202921]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 22 08:52:23 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:23.628+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:24 np0005592159 python3.9[203005]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
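The dnf task installs iscsi-initiator-utils with stock options (install_weak_deps=True, no version pinning). Outside Ansible the same step and a quick verification would be approximately:

    dnf -y install iscsi-initiator-utils
    rpm -q iscsi-initiator-utils                 # confirm the package landed
    systemctl status iscsid.socket --no-pager    # socket unit shipped by the package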
Jan 22 08:52:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:24.659+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:25.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:25.616+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:25 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:26.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:26.641+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:27.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:27.645+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:28 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:28.610+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:29.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:29.607+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:29 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)
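The SLOW_OPS health updates in this section differ only in the "blocked for N sec" field (934 here, then 944, 949, 954 and 963 seconds further down). A short sketch, assuming journal text in exactly this format, extracts those durations so the age of the oldest blocked op can be tracked:

    import re

    SLOW_OPS_RE = re.compile(r"oldest one blocked for (\d+) sec, (\S+) has slow ops")

    journal_lines = [
        "Health check update: 4 slow ops, oldest one blocked for 934 sec, osd.2 has slow ops (SLOW_OPS)",
        "Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)",
        "Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)",
    ]

    for line in journal_lines:
        m = SLOW_OPS_RE.search(line)
        if m:
            seconds, daemon = int(m.group(1)), m.group(2)
            print(f"{daemon}: oldest op blocked {seconds}s")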
Jan 22 08:52:29 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:30.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:30.575+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:30 np0005592159 python3.9[203162]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:31.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:31.590+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:32 np0005592159 python3.9[203314]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:52:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:32.578+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592159 python3.9[203468]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:52:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:33.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:33.532+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:33 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 944 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:33 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:34 np0005592159 python3.9[203620]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
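/usr/sbin/iscsi-iname, invoked above, prints a random initiator IQN. A rough imitation in Python, assuming the iqn.1994-05.com.redhat prefix the tool uses by default (on a real host the tool itself should be used):

    import secrets

    # Produce a name in the same shape iscsi-iname emits by default,
    # e.g. iqn.1994-05.com.redhat:<random hex>. Illustrative only.
    def make_initiator_name(prefix: str = "iqn.1994-05.com.redhat") -> str:
        return f"{prefix}:{secrets.token_hex(6)}"

    print(make_initiator_name())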
Jan 22 08:52:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:52:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:52:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:34.539+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:34 np0005592159 python3.9[203774]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:35 np0005592159 podman[203869]: 2026-01-22 13:52:35.340838099 +0000 UTC m=+0.096989403 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 08:52:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:35 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:35 np0005592159 python3.9[203911]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089954.241237-248-18983675206636/.source.iscsi _original_basename=.zlma7wjw follow=False checksum=ac6eeee5c3166b111e4e31f108595919a1a56d1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
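The copy task above logs a sha1 checksum for the deployed /etc/iscsi/initiatorname.iscsi. A quick verification sketch, reusing the path and checksum from that entry:

    import hashlib

    EXPECTED = "ac6eeee5c3166b111e4e31f108595919a1a56d1b"  # checksum logged by the copy task
    path = "/etc/iscsi/initiatorname.iscsi"

    with open(path, "rb") as fh:
        digest = hashlib.sha1(fh.read()).hexdigest()

    print("match" if digest == EXPECTED else f"mismatch: {digest}")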
Jan 22 08:52:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:35.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:36 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:36 np0005592159 python3.9[204073]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:52:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:36.559+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:37 np0005592159 python3.9[204225]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
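The lineinfile call above enforces a single node.session.auth.chap_algs setting in /etc/iscsi/iscsid.conf. A rough Python equivalent of that idempotent edit, built only from the regexp, line and insertafter values in the log (writing the file back in place is an assumption):

    import re

    path = "/etc/iscsi/iscsid.conf"
    wanted = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    match_re = re.compile(r"^node\.session\.auth\.chap_algs")
    after_re = re.compile(r"^#node\.session\.auth\.chap\.algs")

    with open(path) as fh:
        lines = fh.read().splitlines()

    if any(match_re.match(line) for line in lines):
        # An entry already exists: rewrite it with the desired value.
        lines = [wanted if match_re.match(line) else line for line in lines]
    else:
        # No entry yet: insert after the commented template line, or append.
        idx = next((i for i, line in enumerate(lines) if after_re.match(line)), len(lines) - 1)
        lines.insert(idx + 1, wanted)

    with open(path, "w") as fh:
        fh.write("\n".join(lines) + "\n")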
Jan 22 08:52:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:37.576+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:37 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:38.534+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:38 np0005592159 python3.9[204378]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:38 np0005592159 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 22 08:52:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:39.517+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:39 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:39 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 949 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:40.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:40.500+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:40 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:40 np0005592159 python3.9[204585]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:52:40 np0005592159 systemd[1]: Reloading.
Jan 22 08:52:40 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:40 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:41 np0005592159 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:52:41 np0005592159 systemd[1]: Starting Open-iSCSI...
Jan 22 08:52:41 np0005592159 kernel: Loading iSCSI transport class v2.0-870.
Jan 22 08:52:41 np0005592159 systemd[1]: Started Open-iSCSI.
Jan 22 08:52:41 np0005592159 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 22 08:52:41 np0005592159 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
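The sequence above, together with the systemd_service calls at 08:52:38 and 08:52:40, enables and starts iscsid.socket and then the iscsid service. A minimal equivalent from Python, assuming systemctl is available on the host:

    import subprocess

    # Enable and start the units the play manages: the socket first, then the service.
    for unit in ("iscsid.socket", "iscsid.service"):
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)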
Jan 22 08:52:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:41.451+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:41 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:42.403+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:42.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:42 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:42 np0005592159 python3.9[204785]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:52:42 np0005592159 network[204802]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:52:42 np0005592159 network[204803]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:52:42 np0005592159 network[204804]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:52:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:43.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:43.353+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:43 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:44.322+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:44 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:45.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:45.372+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:45 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:46.406+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:52:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:46.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:52:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:52:47.158 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:52:47 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:47.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:47.368+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:48.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:48.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:48 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:48 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 954 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:49.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:49.389+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:49 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:50 np0005592159 python3.9[205079]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:52:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:50.344+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:51 np0005592159 podman[205082]: 2026-01-22 13:52:51.004286138 +0000 UTC m=+0.067030201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 08:52:51 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:51.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:51.384+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:52 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:52.365+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:52.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:53 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:52:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:53.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:52:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:53.405+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:53 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:52:53 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:52:53 np0005592159 systemd[1]: Reloading.
Jan 22 08:52:53 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:52:53 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:52:53 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:52:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:54.377+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:52:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:54.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:52:54 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:54 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:55 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:52:55 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:52:55 np0005592159 systemd[1]: run-r61bf3cef94aa40e1a45e9be128813cc7.service: Deactivated successfully.
Jan 22 08:52:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:55.408+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:55 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:56.453+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:56 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:52:57 np0005592159 python3.9[205418]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:52:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:57.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:57.421+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:57 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:58 np0005592159 python3.9[205570]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 22 08:52:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:52:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:52:58.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:52:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:58.461+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:58 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:52:58 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:52:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:52:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:52:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:52:59 np0005592159 python3.9[205750]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:52:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:52:59.426+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:52:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:52:59 np0005592159 python3.9[205900]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089978.8074253-514-82993534823883/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:00.447+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:00.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:01 np0005592159 python3.9[206053]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:01 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:01.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:01.399+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:02 np0005592159 python3.9[206205]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:02 np0005592159 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:02 np0005592159 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:02 np0005592159 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:02 np0005592159 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:02 np0005592159 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:02.359+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:02.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:02 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:03.320+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:03.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:03 np0005592159 python3.9[206362]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:03 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:04.360+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:04 np0005592159 python3.9[206515]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:53:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:04.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:53:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:05.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:05 np0005592159 python3.9[206668]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:05 np0005592159 podman[206763]: 2026-01-22 13:53:05.925637739 +0000 UTC m=+0.128529958 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 08:53:06 np0005592159 python3.9[206811]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769089984.8598263-664-102599571371128/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:06.328+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:06 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:06.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:07 np0005592159 python3.9[206970]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:07.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:07 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:07 np0005592159 python3.9[207123]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:08.314+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:08.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:08 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:08 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:09 np0005592159 python3.9[207276]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:09.355+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:09 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:09 np0005592159 python3.9[207428]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:10.394+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:10.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:10 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:11 np0005592159 python3.9[207581]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:53:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:11.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:53:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:11.431+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:11 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:11 np0005592159 python3.9[207733]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:12 np0005592159 python3.9[207885]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:12.397+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:12 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:12 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:13 np0005592159 python3.9[208038]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:13.388+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:13.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:13 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:13 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:14 np0005592159 python3.9[208215]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:53:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:14.418+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:14 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:15 np0005592159 python3.9[208476]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:53:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:15.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:15.460+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:16 np0005592159 python3.9[208629]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:16 np0005592159 systemd[1]: Listening on multipathd control socket.
Jan 22 08:53:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:16.450+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:17 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:53:17 np0005592159 python3.9[208786]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:17 np0005592159 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 22 08:53:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:17.477+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:17 np0005592159 udevadm[208791]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 22 08:53:17 np0005592159 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 22 08:53:17 np0005592159 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:17 np0005592159 multipathd[208794]: --------start up--------
Jan 22 08:53:17 np0005592159 multipathd[208794]: read /etc/multipath.conf
Jan 22 08:53:17 np0005592159 multipathd[208794]: path checkers start up
Jan 22 08:53:17 np0005592159 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:53:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:18.479+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:19 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:19 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:19 np0005592159 python3.9[208954]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 22 08:53:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:19.440+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:19 np0005592159 python3.9[209156]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 22 08:53:19 np0005592159 kernel: Key type psk registered
Jan 22 08:53:20 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:20.436+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:53:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:20.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:53:20 np0005592159 python3.9[209319]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:53:21 np0005592159 podman[209414]: 2026-01-22 13:53:21.273371422 +0000 UTC m=+0.096263549 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 08:53:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 08:53:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:21.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 08:53:21 np0005592159 python3.9[209453]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769090000.232714-1056-131343871564022/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:21.436+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:22 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:22 np0005592159 python3.9[209612]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:22.390+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:22.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:22 np0005592159 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 22 08:53:23 np0005592159 python3.9[209766]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:23 np0005592159 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 22 08:53:23 np0005592159 systemd[1]: Stopped Load Kernel Modules.
Jan 22 08:53:23 np0005592159 systemd[1]: Stopping Load Kernel Modules...
Jan 22 08:53:23 np0005592159 systemd[1]: Starting Load Kernel Modules...
Jan 22 08:53:23 np0005592159 systemd[1]: Finished Load Kernel Modules.
Jan 22 08:53:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:23.392+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:23.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:23 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:23 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:53:23 np0005592159 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 08:53:24 np0005592159 python3.9[209973]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 22 08:53:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:24.366+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:24.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:25.416+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:25 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:26.401+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592159 systemd[1]: Reloading.
Jan 22 08:53:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:26 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:26 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:26 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:26 np0005592159 systemd[1]: Reloading.
Jan 22 08:53:26 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:26 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:27 np0005592159 systemd-logind[787]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 22 08:53:27 np0005592159 systemd-logind[787]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 22 08:53:27 np0005592159 lvm[210086]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 08:53:27 np0005592159 lvm[210086]: VG ceph_vg0 finished
Jan 22 08:53:27 np0005592159 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 22 08:53:27 np0005592159 systemd[1]: Starting man-db-cache-update.service...
Jan 22 08:53:27 np0005592159 systemd[1]: Reloading.
Jan 22 08:53:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:27.380+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:27.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:27 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:27 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:27 np0005592159 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 22 08:53:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:28.338+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 08:53:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 08:53:28 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:28 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 998 sec, osd.2 has slow ops (SLOW_OPS)
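The SLOW_OPS warning above has been repeating for osd.2 for well over 900 seconds. A minimal sketch, assuming the standard ceph CLI is on PATH, of pulling the blocked ops out for a closer look; both CLI commands are regular Ceph admin commands, and wrapping them in Python is only for illustration:

    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(["ceph", *args], capture_output=True,
                             text=True, check=True).stdout
        return json.loads(out)

    # Cluster-wide view of the warning, as JSON.
    health = ceph_json("health", "detail", "--format", "json")
    print(health.get("status"))  # e.g. HEALTH_WARN while SLOW_OPS is active

    # Admin-socket view from the OSD itself (run on the OSD host);
    # dump_ops_in_flight already emits JSON.
    ops = ceph_json("daemon", "osd.2", "dump_ops_in_flight")
    for op in ops.get("ops", []):
        print(op.get("age"), op.get("description"))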
Jan 22 08:53:29 np0005592159 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 22 08:53:29 np0005592159 systemd[1]: Finished man-db-cache-update.service.
Jan 22 08:53:29 np0005592159 systemd[1]: man-db-cache-update.service: Consumed 1.279s CPU time.
Jan 22 08:53:29 np0005592159 systemd[1]: run-rc6463b1310544d1999f86623272abec1.service: Deactivated successfully.
Jan 22 08:53:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:29.321+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:29.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:29 np0005592159 python3.9[211445]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:29 np0005592159 systemd[1]: Stopping Open-iSCSI...
Jan 22 08:53:29 np0005592159 iscsid[204625]: iscsid shutting down.
Jan 22 08:53:29 np0005592159 systemd[1]: iscsid.service: Deactivated successfully.
Jan 22 08:53:29 np0005592159 systemd[1]: Stopped Open-iSCSI.
Jan 22 08:53:29 np0005592159 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 22 08:53:29 np0005592159 systemd[1]: Starting Open-iSCSI...
Jan 22 08:53:29 np0005592159 systemd[1]: Started Open-iSCSI.
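The Open-iSCSI stop/start above comes from the ansible.builtin.systemd_service task with state=restarted; outside Ansible it boils down to systemctl restart plus a state check. A minimal sketch (the helper is illustrative, the systemctl calls are standard):

    import subprocess

    def restart_and_verify(unit):
        subprocess.run(["systemctl", "restart", unit], check=True)
        state = subprocess.run(["systemctl", "is-active", unit],
                               capture_output=True, text=True).stdout.strip()
        if state != "active":
            raise RuntimeError(f"{unit} is {state} after restart")
        return state

    print(restart_and_verify("iscsid.service"))  # expected: 'active'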
Jan 22 08:53:29 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:30.281+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 08:53:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 08:53:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592159 python3.9[211602]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:53:31 np0005592159 multipathd[208794]: exit (signal)
Jan 22 08:53:31 np0005592159 multipathd[208794]: --------shut down-------
Jan 22 08:53:31 np0005592159 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 22 08:53:31 np0005592159 systemd[1]: multipathd.service: Deactivated successfully.
Jan 22 08:53:31 np0005592159 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 22 08:53:31 np0005592159 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 22 08:53:31 np0005592159 multipathd[211608]: --------start up--------
Jan 22 08:53:31 np0005592159 multipathd[211608]: read /etc/multipath.conf
Jan 22 08:53:31 np0005592159 multipathd[211608]: path checkers start up
Jan 22 08:53:31 np0005592159 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 22 08:53:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:31.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:31.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:31 np0005592159 python3.9[211765]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 22 08:53:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:32.281+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 08:53:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 08:53:33 np0005592159 python3.9[211922]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
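The file task above only ensures /etc/ssh/ssh_known_hosts exists with mode 0644. A rough equivalent, for illustration only:

    import os
    from pathlib import Path

    p = Path("/etc/ssh/ssh_known_hosts")
    p.touch(mode=0o644, exist_ok=True)  # mode applies only on creation, subject to umask
    os.chmod(p, 0o644)                  # enforce the mode even if the file already existed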
Jan 22 08:53:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:33.286+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:33.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:33 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:33 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:34.287+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:34 np0005592159 python3.9[212074]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:53:34 np0005592159 systemd[1]: Reloading.
Jan 22 08:53:34 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:53:34 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:53:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:34 np0005592159 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 22 08:53:34 np0005592159 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 08:53:35 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:35.255+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:35.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:35 np0005592159 python3.9[212263]: ansible-ansible.builtin.service_facts Invoked
Jan 22 08:53:35 np0005592159 network[212280]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 22 08:53:35 np0005592159 network[212281]: 'network-scripts' will be removed from distribution in near future.
Jan 22 08:53:35 np0005592159 network[212282]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 22 08:53:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:36.230+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000013s ======
Jan 22 08:53:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:36.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Jan 22 08:53:36 np0005592159 podman[212290]: 2026-01-22 13:53:36.628403081 +0000 UTC m=+0.080697862 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
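The podman health_status event above reports ovn_controller as healthy. A small sketch of reading the same state back on demand; podman inspect is standard, while the helper and the key fallback (the JSON key is "Health" or "Healthcheck" depending on the podman version) are mine:

    import json
    import subprocess

    def container_health(name):
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(raw)[0].get("State", {})
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(container_health("ovn_controller"))  # expected: 'healthy'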
Jan 22 08:53:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:37.273+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 08:53:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:37.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 08:53:37 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:37 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:38.276+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:38.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:38 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:38 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:39.233+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:39.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:39 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:39 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:40.274+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:40.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:40 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:41.291+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:41.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:42 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:42.339+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:42.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:42 np0005592159 python3.9[212637]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:43 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:43 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:43.372+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:43.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:43 np0005592159 python3.9[212790]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:44.361+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:44 np0005592159 python3.9[212943]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:45 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:45.340+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:45.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:46 np0005592159 python3.9[213097]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:46.348+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.159 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.159 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:53:47.160 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
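The three DEBUG lines above are oslo.concurrency's lock tracing around neutron's ProcessMonitor._check_child_processes. The pattern that produces this acquire/held/release logging is the lockutils decorator; the guarded function below is a stand-in, not neutron's actual code:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # ... inspect child processes here ...
        pass

    check_child_processes()  # lockutils logs acquire/held/release timings at DEBUG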
Jan 22 08:53:47 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592159 python3.9[213251]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:47.393+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:47.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:48 np0005592159 python3.9[213404]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:48 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:48.398+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000012s ======
Jan 22 08:53:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Jan 22 08:53:48 np0005592159 python3.9[213558]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:49 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:49 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:49.396+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:49.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:49 np0005592159 python3.9[213711]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:53:50 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:50.389+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:50.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:51 np0005592159 python3.9[213865]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:51.357+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:51 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:51.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:51 np0005592159 podman[213989]: 2026-01-22 13:53:51.828220587 +0000 UTC m=+0.083849322 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 08:53:51 np0005592159 python3.9[214032]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:52.326+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:52 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:52.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:52 np0005592159 python3.9[214187]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:53 np0005592159 python3.9[214340]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:53.337+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:53 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:53 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:53.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:53 np0005592159 python3.9[214492]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:54.365+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:54 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:54.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:54 np0005592159 python3.9[214644]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:55 np0005592159 python3.9[214797]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:55.404+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:55.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:55 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:56 np0005592159 python3.9[214949]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:56.379+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:56.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:53:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:57.396+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:57.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:57 np0005592159 python3.9[215102]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:58 np0005592159 python3.9[215254]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:58.445+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:53:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:53:58.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:53:58 np0005592159 python3.9[215407]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:53:59 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:53:59 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:53:59.431+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:53:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:53:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:53:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:53:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:53:59.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:53:59 np0005592159 python3.9[215559]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:00 np0005592159 python3.9[215761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:00.464+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:00.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:00 np0005592159 python3.9[215914]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:01 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:01.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:01.501+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:01 np0005592159 python3.9[216066]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:02 np0005592159 python3.9[216218]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:02 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:02.541+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:02.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:03.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:03.531+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:03 np0005592159 python3.9[216371]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:04 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:04.484+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:04.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:04 np0005592159 python3.9[216524]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 22 08:54:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:05.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:05.488+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592159 python3.9[216676]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:54:06 np0005592159 systemd[1]: Reloading.
Jan 22 08:54:06 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:54:06 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:54:06 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:06.475+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:06.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:07 np0005592159 podman[216812]: 2026-01-22 13:54:07.047320353 +0000 UTC m=+0.099488807 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:54:07 np0005592159 python3.9[216890]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:07 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:07.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:07.480+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:07 np0005592159 python3.9[217044]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:08 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:08 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1039 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:08.505+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:08.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:08 np0005592159 python3.9[217199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:09 np0005592159 python3.9[217353]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:09 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:09.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:09.493+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:09 np0005592159 python3.9[217506]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:10 np0005592159 python3.9[217659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:10 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:10.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:10.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:11 np0005592159 python3.9[217813]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:11.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:11 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:11.489+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:11 np0005592159 python3.9[217966]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 22 08:54:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:12.468+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:12 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:12.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.328776) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053328869, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1673, "num_deletes": 256, "total_data_size": 3218516, "memory_usage": 3275136, "flush_reason": "Manual Compaction"}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053342402, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 2115001, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18918, "largest_seqno": 20586, "table_properties": {"data_size": 2108467, "index_size": 3414, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16221, "raw_average_key_size": 20, "raw_value_size": 2094087, "raw_average_value_size": 2620, "num_data_blocks": 150, "num_entries": 799, "num_filter_entries": 799, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769089938, "oldest_key_time": 1769089938, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 13678 microseconds, and 5808 cpu microseconds.
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342467) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 2115001 bytes OK
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.342485) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343797) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343813) EVENT_LOG_v1 {"time_micros": 1769090053343808, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.343831) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 3210617, prev total WAL file size 3210617, number of live WAL files 2.
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353033' seq:0, type:0; will stop at (end)
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(2065KB)], [36(7562KB)]
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053345053, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 9859060, "oldest_snapshot_seqno": -1}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 5466 keys, 9664217 bytes, temperature: kUnknown
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053417199, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 9664217, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9627902, "index_size": 21549, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13701, "raw_key_size": 141058, "raw_average_key_size": 25, "raw_value_size": 9528641, "raw_average_value_size": 1743, "num_data_blocks": 864, "num_entries": 5466, "num_filter_entries": 5466, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.417596) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 9664217 bytes
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.419118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.4 rd, 133.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 5993, records dropped: 527 output_compression: NoCompression
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.419134) EVENT_LOG_v1 {"time_micros": 1769090053419125, "job": 20, "event": "compaction_finished", "compaction_time_micros": 72306, "compaction_time_cpu_micros": 25884, "output_level": 6, "num_output_files": 1, "total_output_size": 9664217, "num_input_records": 5993, "num_output_records": 5466, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053419570, "job": 20, "event": "table_file_deletion", "file_number": 38}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090053420724, "job": 20, "event": "table_file_deletion", "file_number": 36}
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.344860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:13.420866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:13.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:13.463+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:13 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:14.487+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:14 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:14.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:14 np0005592159 python3.9[218121]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:15.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:15 np0005592159 python3.9[218273]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:15.509+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:15 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592159 python3.9[218425]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:16.473+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:16.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:17 np0005592159 python3.9[218578]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:17.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:17.513+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:17 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:18 np0005592159 python3.9[218730]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:18.529+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:18.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:18 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1049 sec, osd.2 has slow ops (SLOW_OPS)
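The "blocked for 1049 sec" figure advances by about 5 s with each health update (1054, 1058, 1063, 1068 s later in this window), in step with the mon's health tick. Back-of-envelope arithmetic, assuming the bracketed Ceph timestamps are UTC, puts the start of the stall around 13:36:49, well before this section of the log:

    from datetime import datetime, timedelta

    oldest_started = datetime(2026, 1, 22, 13, 54, 18) - timedelta(seconds=1049)
    print(oldest_started)   # 2026-01-22 13:36:49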
Jan 22 08:54:19 np0005592159 python3.9[218883]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:19.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:19.531+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:19 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:19 np0005592159 python3.9[219035]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:20 np0005592159 python3.9[219237]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:20.516+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:20 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:21 np0005592159 python3.9[219390]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:21.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:21.497+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:21 np0005592159 python3.9[219542]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:21 np0005592159 podman[219567]: 2026-01-22 13:54:21.984905054 +0000 UTC m=+0.050143880 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
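The podman event above is the periodic healthcheck for the ovn_metadata_agent container ('test': '/openstack/healthcheck') reporting healthy with no failing streak. A manual re-run could look like the sketch below; the container name comes from the log, and treating exit code 0 as healthy follows podman's healthcheck convention:

    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "ovn_metadata_agent"],
                            capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else (result.stdout or result.stderr).strip())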
Jan 22 08:54:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:22.538+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:22.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:22 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:23.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:23.492+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:23 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1054 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:24.451+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:24.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:25.474+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:25.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:25 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 08:54:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 08:54:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:26.466+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:26.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:26 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:27.454+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:27.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:27 np0005592159 python3.9[219848]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 22 08:54:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:28.430+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:28.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:28 np0005592159 python3.9[220002]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 22 08:54:28 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:28 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1058 sec, osd.2 has slow ops (SLOW_OPS)
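For a SLOW_OPS warning that keeps aging like this one, a usual next step is to dump osd.2's in-flight ops over its admin socket. A sketch, assuming it runs where that socket is reachable (for example inside cephadm shell on the OSD host) and that the command's JSON output carries a num_ops field:

    import json, subprocess

    out = subprocess.run(["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out).get("num_ops"))   # field name assumed from the JSON output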
Jan 22 08:54:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:29.447+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:29.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:29 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:30 np0005592159 python3.9[220160]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
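The getent/group/user tasks above ensure a nova account with uid/gid 42436, shell /bin/sh and supplementary membership in libvirt. A quick verification sketch using the values from those invocations; it only makes sense on this host after the play has run:

    import pwd, grp

    nova = pwd.getpwnam("nova")
    print(nova.pw_uid == 42436,                       # uid from the user task
          grp.getgrgid(nova.pw_gid).gr_name,          # expected: nova (gid 42436)
          nova.pw_shell,                              # expected: /bin/sh
          "nova" in grp.getgrnam("libvirt").gr_mem)   # groups=['libvirt']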
Jan 22 08:54:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:30.397+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:30.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:30 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:31.373+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:31.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:31 np0005592159 systemd-logind[787]: New session 50 of user zuul.
Jan 22 08:54:31 np0005592159 systemd[1]: Started Session 50 of User zuul.
Jan 22 08:54:31 np0005592159 systemd[1]: session-50.scope: Deactivated successfully.
Jan 22 08:54:31 np0005592159 systemd-logind[787]: Session 50 logged out. Waiting for processes to exit.
Jan 22 08:54:31 np0005592159 systemd-logind[787]: Removed session 50.
Jan 22 08:54:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:32 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:32.333+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:32 np0005592159 python3.9[220347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000055s ======
Jan 22 08:54:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.925268) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072925560, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 548, "num_deletes": 251, "total_data_size": 649547, "memory_usage": 660696, "flush_reason": "Manual Compaction"}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072933171, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 415944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20591, "largest_seqno": 21134, "table_properties": {"data_size": 413181, "index_size": 735, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7315, "raw_average_key_size": 19, "raw_value_size": 407344, "raw_average_value_size": 1092, "num_data_blocks": 33, "num_entries": 373, "num_filter_entries": 373, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090053, "oldest_key_time": 1769090053, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8175 microseconds, and 2052 cpu microseconds.
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.933448) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 415944 bytes OK
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.933560) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935276) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935295) EVENT_LOG_v1 {"time_micros": 1769090072935289, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.935328) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 646312, prev total WAL file size 646312, number of live WAL files 2.
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.936518) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(406KB)], [39(9437KB)]
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072936568, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 10080161, "oldest_snapshot_seqno": -1}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5324 keys, 8372170 bytes, temperature: kUnknown
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072994113, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 8372170, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8337808, "index_size": 19980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 138911, "raw_average_key_size": 26, "raw_value_size": 8241822, "raw_average_value_size": 1548, "num_data_blocks": 796, "num_entries": 5324, "num_filter_entries": 5324, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090072, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.994338) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8372170 bytes
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.995984) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.0 rd, 145.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(44.4) write-amplify(20.1) OK, records in: 5839, records dropped: 515 output_compression: NoCompression
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.996004) EVENT_LOG_v1 {"time_micros": 1769090072995996, "job": 22, "event": "compaction_finished", "compaction_time_micros": 57603, "compaction_time_cpu_micros": 19582, "output_level": 6, "num_output_files": 1, "total_output_size": 8372170, "num_input_records": 5839, "num_output_records": 5324, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072996201, "job": 22, "event": "table_file_deletion", "file_number": 41}
Jan 22 08:54:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072997825, "job": 22, "event": "table_file_deletion", "file_number": 39}
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.936422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997923) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997925) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-13:54:32.997927) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
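The mon's RocksDB prefixes its structured records with EVENT_LOG_v1 followed by a JSON object (flush_started, table_file_creation, compaction_finished, ...), as in the flush/compaction run above. A small extractor for those payloads; the regex and the truncated sample line are mine:

    import json, re

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

    def parse_event(line):
        m = EVENT_RE.search(line)
        return json.loads(m.group(1)) if m else None

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1769090072925560, "job": 21, '
              '"event": "flush_started", "num_memtables": 1, "num_entries": 548}')
    evt = parse_event(sample)
    print(evt["job"], evt["event"], evt["num_entries"])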
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 08:54:33 np0005592159 python3.9[220469]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090072.136436-2661-70723950249857/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
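ansible.legacy.copy reports the deployed file's SHA-1 (the checksum= value above). The same digest can be recomputed to confirm what landed on disk; the path is taken from the task, the helper name is mine:

    import hashlib

    def sha1_of(path):
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1_of("/var/lib/openstack/config/nova/config.json"))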
Jan 22 08:54:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:33.300+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:33 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:33.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:33 np0005592159 python3.9[220669]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:34 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:34 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:34 np0005592159 python3.9[220745]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:34.251+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:34 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:34 np0005592159 python3.9[220896]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:35 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:35.243+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:35 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:35 np0005592159 python3.9[221017]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090074.33942-2661-193013022464288/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:35.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:36 np0005592159 python3.9[221167]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:36 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:36.278+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:36 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:36 np0005592159 python3.9[221288]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090075.4237041-2661-169273789506235/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:36.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:37 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:37 np0005592159 python3.9[221439]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:37.322+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:37 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:37.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:37 np0005592159 podman[221534]: 2026-01-22 13:54:37.55688509 +0000 UTC m=+0.077526195 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 08:54:37 np0005592159 python3.9[221573]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090076.6936722-2661-186240561726968/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:38 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:38.284+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:38 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:38 np0005592159 python3.9[221736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:38 np0005592159 python3.9[221858]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090077.8564768-2661-166398153096849/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:39 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:39 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:39.255+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:39 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:39.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:40 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:40.300+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:40 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:40 np0005592159 python3.9[222061]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:40.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:41 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:41.319+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:41 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:41.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:41 np0005592159 python3.9[222213]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:54:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:42 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:42.317+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:42 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:42 np0005592159 python3.9[222365]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:42.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:43 np0005592159 python3.9[222518]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:43 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:43.295+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:43 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:43.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:43 np0005592159 python3.9[222641]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769090082.686153-2985-69131524682927/.source _original_basename=.yenrcdsu follow=False checksum=d73f8e53f15f2892abac02b728024fce172554d8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 22 08:54:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:44.306+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:44 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:44.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:44 np0005592159 python3.9[222794]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:54:45 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:45 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:45.297+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:45 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:45.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:45 np0005592159 python3.9[222946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:46 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:46.248+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:46 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:46 np0005592159 python3.9[223067]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090085.2893736-3062-176971979301543/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:46.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.160 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 08:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.161 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 08:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 13:54:47.161 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 08:54:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:47.199+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:47 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:47 np0005592159 python3.9[223218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 22 08:54:47 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:47.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:47 np0005592159 python3.9[223339]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769090086.5943604-3106-129330245165084/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 22 08:54:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:48.153+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:48 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:48 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:48.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:48 np0005592159 python3.9[223492]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 22 08:54:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:49.132+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:49 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:49.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:49 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592159 python3.9[223644]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:54:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:50.171+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:50 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:50.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:51.145+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:51 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:51 np0005592159 python3[223797]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:54:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:51.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:51 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:52.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:52 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:52.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:52 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:53 np0005592159 podman[223833]: 2026-01-22 13:54:53.030927675 +0000 UTC m=+0.086708115 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 08:54:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:53.143+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:53 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:53.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:54 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:54 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:54:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:54.140+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:54 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:54:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:54.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:54:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:55.130+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:55 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:55 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:56.144+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:56 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:56.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:57.122+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:57 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:54:57 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:57 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:57.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:58.170+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:58 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:54:58.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:54:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:54:59.201+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:59 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:54:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:54:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:54:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:54:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:54:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:00.191+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:00 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:55:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:00.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:55:00 np0005592159 podman[223810]: 2026-01-22 13:55:00.955728188 +0000 UTC m=+9.534893691 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:01 np0005592159 podman[223959]: 2026-01-22 13:55:01.119549434 +0000 UTC m=+0.049928308 container create 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 08:55:01 np0005592159 podman[223959]: 2026-01-22 13:55:01.089882082 +0000 UTC m=+0.020260976 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:01 np0005592159 python3[223797]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 22 08:55:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:01.202+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:01 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:02.190+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:02 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:02.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:03.177+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:03 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:03 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:04.180+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:04 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:04 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:04.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:04 np0005592159 python3.9[224150]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:05.180+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:05 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:05.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:06.154+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:06 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:06 np0005592159 python3.9[224304]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 22 08:55:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:06.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:06 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:07.192+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:07 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:07 np0005592159 python3.9[224457]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 22 08:55:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:07.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:08 np0005592159 podman[224538]: 2026-01-22 13:55:08.026277394 +0000 UTC m=+0.086773798 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:08.159+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:08 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:08 np0005592159 python3[224636]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 22 08:55:08 np0005592159 podman[224674]: 2026-01-22 13:55:08.589136008 +0000 UTC m=+0.060378474 container create 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 08:55:08 np0005592159 podman[224674]: 2026-01-22 13:55:08.558432607 +0000 UTC m=+0.029675083 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 22 08:55:08 np0005592159 python3[224636]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 22 08:55:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:08.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:08 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:08 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:09.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:09 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:09.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:09 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:09 np0005592159 python3.9[224864]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:10.189+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:10 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:10.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:10 np0005592159 python3.9[225019]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:11.236+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:11 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:11 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:11.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:11 np0005592159 python3.9[225170]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769090110.9074886-3394-195255922898696/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 22 08:55:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:12.209+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:12 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:12 np0005592159 python3.9[225246]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 22 08:55:12 np0005592159 systemd[1]: Reloading.
Jan 22 08:55:12 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:12 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:13 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:13.174+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:13 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:13 np0005592159 python3.9[225358]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 22 08:55:13 np0005592159 systemd[1]: Reloading.
Jan 22 08:55:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:13.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:13 np0005592159 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 22 08:55:13 np0005592159 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 22 08:55:13 np0005592159 systemd[1]: Starting nova_compute container...
Jan 22 08:55:13 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:55:13 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:13 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:14 np0005592159 podman[225398]: 2026-01-22 13:55:14.012823101 +0000 UTC m=+0.160297581 container init 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute)
Jan 22 08:55:14 np0005592159 podman[225398]: 2026-01-22 13:55:14.025589381 +0000 UTC m=+0.173063881 container start 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, managed_by=edpm_ansible)
Jan 22 08:55:14 np0005592159 podman[225398]: nova_compute
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + sudo -E kolla_set_configs
Jan 22 08:55:14 np0005592159 systemd[1]: Started nova_compute container.
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Validating config file
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying service configuration files
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Writing out command to execute
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:14 np0005592159 nova_compute[225413]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:14 np0005592159 nova_compute[225413]: ++ cat /run_command
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + CMD=nova-compute
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + ARGS=
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + sudo kolla_copy_cacerts
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + [[ ! -n '' ]]
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + . kolla_extend_start
Jan 22 08:55:14 np0005592159 nova_compute[225413]: Running command: 'nova-compute'
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + umask 0022
Jan 22 08:55:14 np0005592159 nova_compute[225413]: + exec nova-compute
Jan 22 08:55:14 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:14 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:14.192+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:14 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:15.143+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:15 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:15 np0005592159 python3.9[225576]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:15.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:16.094+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:16 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:16 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.258 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.258 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.259 225417 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.259 225417 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 08:55:16 np0005592159 python3.9[225728]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.408 225417 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.439 225417 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.440 225417 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
Jan 22 08:55:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:16 np0005592159 nova_compute[225413]: 2026-01-22 13:55:16.985 225417 INFO nova.virt.driver [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 08:55:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:17.087+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:17 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.113 225417 INFO nova.compute.provider_config [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 08:55:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.134 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.135 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.136 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.137 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.138 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.139 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.140 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.141 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.142 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.143 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.144 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.145 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.146 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.147 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.148 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.149 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.150 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.151 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.152 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.153 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.154 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.155 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.156 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.157 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.158 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.159 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.160 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.161 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.162 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.163 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.164 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.165 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.166 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.167 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.168 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.169 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.170 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.171 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.172 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.173 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.174 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.175 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.176 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.177 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.178 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.179 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.180 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.181 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.182 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.183 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.184 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.185 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.186 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.187 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.188 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.189 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.190 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.191 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.192 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.193 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.194 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
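Each "group.option = value" DEBUG line above is oslo.config logging its effective option values at service start (log_opt_values); connection strings appear as **** because those options are registered as secret. Mapped back to nova.conf syntax, the [api_database] values just logged would look roughly like the sketch below (the connection URL is a hypothetical placeholder, since the real one is masked in the log):

    [api_database]
    # hypothetical placeholder; the actual URL is masked as **** in the log
    connection = mysql+pymysql://nova_api:<secret>@<db-host>/nova_api
    connection_recycle_time = 3600
    max_overflow = 50
    max_pool_size = 5
    max_retries = 10
    mysql_sql_mode = TRADITIONAL

Options left at their defaults are also printed by log_opt_values, so a line in this dump does not by itself mean the option was set explicitly in nova.conf.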
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.195 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.196 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.197 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.198 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.199 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.200 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.201 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.202 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.203 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.204 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.205 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.206 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.207 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.208 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.209 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.210 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.211 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.212 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.213 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.214 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.215 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.216 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.217 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.218 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.219 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.220 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.221 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.222 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.223 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.224 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.225 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.226 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.227 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.228 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.229 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.230 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.231 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.232 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 WARNING oslo_config.cfg [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:17 np0005592159 nova_compute[225413]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:17 np0005592159 nova_compute[225413]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:17 np0005592159 nova_compute[225413]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:17 np0005592159 nova_compute[225413]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.233 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.234 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.235 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.236 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.237 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.238 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.239 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.240 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.241 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.242 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.243 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.244 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.245 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.246 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.247 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.248 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.249 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.250 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.251 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.252 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.253 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.254 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.255 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.256 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.257 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.258 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.259 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.260 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.261 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.262 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.263 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.264 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.265 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.266 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.267 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.268 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.269 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.270 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.271 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.272 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.273 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.274 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.275 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.276 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.277 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.278 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.279 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.280 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.281 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.282 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.283 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.284 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.285 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.286 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.287 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.288 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.289 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.290 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.291 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.292 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.293 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.294 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.295 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 python3.9[225881]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.296 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.297 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.298 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.299 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.300 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.301 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.302 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.303 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.304 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.305 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.306 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.307 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.308 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.309 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.310 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.311 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.312 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.313 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.314 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.315 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.316 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.316 225417 DEBUG oslo_service.service [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.317 225417 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.330 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.330 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.331 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.331 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 08:55:17 np0005592159 systemd[1]: Starting libvirt QEMU daemon...
Jan 22 08:55:17 np0005592159 systemd[1]: Started libvirt QEMU daemon.
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.405 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fb57bf492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.407 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fb57bf492b0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.408 225417 INFO nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Connection event '1' reason 'None'#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.435 225417 WARNING nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:17 np0005592159 nova_compute[225413]: 2026-01-22 13:55:17.436 225417 DEBUG nova.virt.libvirt.volume.mount [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 08:55:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:17.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:18.094+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:18 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.319 225417 INFO nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <host>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <uuid>5492a354-d192-4c48-8602-99be1884b049</uuid>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <arch>x86_64</arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <microcode version='16777317'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='x2apic'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='tsc-deadline'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='osxsave'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='hypervisor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='tsc_adjust'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='spec-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='stibp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='arch-capabilities'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='cmp_legacy'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='topoext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='virt-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='lbrv'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='tsc-scale'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='vmcb-clean'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='pause-filter'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='pfthreshold'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='rdctl-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='mds-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <power_management>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <suspend_mem/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </power_management>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <iommu support='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <migration_features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <live/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <uri_transports>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </uri_transports>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </migration_features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <topology>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <cells num='1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <cell id='0'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <distances>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <sibling id='0' value='10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          </distances>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          <cpus num='8'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:          </cpus>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        </cell>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </cells>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </topology>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <cache>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </cache>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <secmodel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model>selinux</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <doi>0</doi>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </secmodel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <secmodel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model>dac</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <doi>0</doi>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </secmodel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </host>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <guest>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <arch name='i686'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <wordsize>32</wordsize>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <pae/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <nonpae/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <cpuselection/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <deviceboot/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </guest>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <guest>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <os_type>hvm</os_type>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <arch name='x86_64'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <wordsize>64</wordsize>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <domain type='qemu'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <domain type='kvm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <apic default='on' toggle='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <cpuselection/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <deviceboot/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <externalSnapshot/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </guest>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </capabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: #033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.326 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.341 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:18 np0005592159 nova_compute[225413]: <domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <os supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>rom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pflash</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>yes</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='secure'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </loader>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </os>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>memfd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </memoryBacking>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>disk</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>floppy</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>lun</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ide</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>fdc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>sata</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </disk>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vnc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </graphics>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <video supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vga</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>none</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>bochs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </video>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='mode'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>requisite</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>optional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pci</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hostdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>random</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </rng>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>path</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>handle</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </filesystem>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emulator</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>external</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>2.0</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </tpm>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </redirdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </channel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </crypto>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>passt</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </interface>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>isa</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </panic>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <console supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>null</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dev</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pipe</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stdio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>udp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tcp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </console>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='features'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vapic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>runtime</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>synic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stimer</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reset</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ipi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>avic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hyperv>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.350 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:18 np0005592159 nova_compute[225413]: <domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <arch>i686</arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <os supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>rom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pflash</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>yes</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='secure'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </loader>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </os>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>memfd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </memoryBacking>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>disk</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>floppy</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>lun</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>fdc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>sata</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </disk>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vnc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </graphics>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <video supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vga</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>none</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>bochs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </video>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='mode'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>requisite</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>optional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pci</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hostdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>random</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </rng>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>path</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>handle</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </filesystem>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emulator</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>external</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>2.0</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </tpm>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </redirdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </channel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </crypto>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>passt</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </interface>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>isa</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </panic>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <console supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>null</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dev</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pipe</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stdio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>udp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tcp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </console>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='features'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vapic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>runtime</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>synic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stimer</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reset</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ipi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>avic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hyperv>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.405 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.410 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:18 np0005592159 nova_compute[225413]: <domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <vcpu max='240'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <os supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='firmware'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>rom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pflash</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>yes</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='secure'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </loader>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </os>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 python3.9[226093]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>memfd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </memoryBacking>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>disk</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>floppy</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>lun</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ide</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>fdc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>sata</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </disk>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vnc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </graphics>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <video supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vga</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>none</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>bochs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </video>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='mode'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>requisite</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>optional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pci</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hostdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>random</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </rng>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>path</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>handle</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </filesystem>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emulator</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>external</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>2.0</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </tpm>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </redirdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </channel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </crypto>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>passt</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </interface>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>isa</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </panic>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <console supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>null</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dev</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pipe</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stdio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>udp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tcp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </console>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='features'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vapic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>runtime</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>synic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stimer</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reset</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ipi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>avic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hyperv>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.489 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:18 np0005592159 nova_compute[225413]: <domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <domain>kvm</domain>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <arch>x86_64</arch>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <vcpu max='4096'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <iothreads supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <os supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='firmware'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>efi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <loader supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>rom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pflash</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='readonly'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>yes</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='secure'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>yes</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>no</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </loader>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </os>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='maximumMigratable'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>on</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>off</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <vendor>AMD</vendor>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='succor'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <mode name='custom' supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ddpd-u'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sha512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm3'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sm4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Denverton-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amd-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='auto-ibrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='perfmon-v2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbpb'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='stibp-always-on'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='EPYC-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-128'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-256'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx10-512'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='prefetchiti'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Haswell-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512er'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512pf'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fma4'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tbm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xop'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='amx-tile'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-bf16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-fp16'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bitalg'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrc'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fzrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='la57'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='taa-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ifma'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cmpccxadd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fbsdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='fsrs'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ibrs-all'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='intel-psfd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='lam'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mcdt-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pbrsb-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='psdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='serialize'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vaes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='hle'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='rtm'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512bw'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512cd'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512dq'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512f'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='avx512vl'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='invpcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pcid'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='pku'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='mpx'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='core-capability'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='split-lock-detect'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='cldemote'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='erms'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='gfni'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdir64b'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='movdiri'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='xsaves'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='athlon-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='core2duo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='coreduo-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='n270-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='ss'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <blockers model='phenom-v1'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnow'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <feature name='3dnowext'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </blockers>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </mode>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <memoryBacking supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <enum name='sourceType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>anonymous</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <value>memfd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </memoryBacking>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <disk supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='diskDevice'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>disk</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cdrom</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>floppy</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>lun</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>fdc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>sata</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </disk>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <graphics supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vnc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egl-headless</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </graphics>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <video supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='modelType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vga</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>cirrus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>none</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>bochs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ramfb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </video>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hostdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='mode'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>subsystem</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='startupPolicy'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>mandatory</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>requisite</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>optional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='subsysType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pci</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>scsi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='capsType'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='pciBackend'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hostdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <rng supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtio-non-transitional</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>random</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>egd</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </rng>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <filesystem supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='driverType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>path</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>handle</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>virtiofs</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </filesystem>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tpm supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-tis</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tpm-crb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emulator</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>external</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendVersion'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>2.0</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </tpm>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <redirdev supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='bus'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>usb</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </redirdev>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <channel supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </channel>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <crypto supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendModel'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>builtin</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </crypto>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <interface supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='backendType'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>default</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>passt</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </interface>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <panic supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='model'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>isa</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>hyperv</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </panic>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <console supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='type'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>null</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vc</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pty</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dev</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>file</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>pipe</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stdio</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>udp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tcp</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>unix</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>qemu-vdagent</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>dbus</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </console>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </devices>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <gic supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <genid supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <backup supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <async-teardown supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <s390-pv supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <ps2 supported='yes'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <tdx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sev supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <sgx supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <hyperv supported='yes'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <enum name='features'>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>relaxed</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vapic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>spinlocks</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vpindex</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>runtime</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>synic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>stimer</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reset</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>vendor_id</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>frequencies</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>reenlightenment</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>tlbflush</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>ipi</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>avic</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>emsr_bitmap</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <value>xmm_input</value>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </enum>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      <defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:      </defaults>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    </hyperv>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:    <launchSecurity supported='no'/>
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  </features>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </domainCapabilities>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.570 225417 DEBUG nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.578 225417 INFO nova.virt.libvirt.host [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Secure Boot support detected#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.580 225417 INFO nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.590 225417 DEBUG nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 08:55:18 np0005592159 nova_compute[225413]:  <model>Nehalem</model>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: </cpu>
Jan 22 08:55:18 np0005592159 nova_compute[225413]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.592 225417 DEBUG nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.663 225417 INFO nova.virt.node [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.684 225417 WARNING nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Compute nodes ['d4dcb68c-0009-4467-a6f7-0e9fe0236fbc'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.729 225417 INFO nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 08:55:18 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.773 225417 WARNING nova.compute.manager [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.774 225417 DEBUG nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 08:55:18 np0005592159 nova_compute[225413]: 2026-01-22 13:55:18.775 225417 DEBUG oslo_concurrency.processutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:18.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:19.133+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:19 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:19 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/729966866' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.184 225417 DEBUG oslo_concurrency.processutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:19 np0005592159 systemd[1]: Starting libvirt nodedev daemon...
Jan 22 08:55:19 np0005592159 systemd[1]: Started libvirt nodedev daemon.
Jan 22 08:55:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:19.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.697 225417 WARNING nova.virt.libvirt.driver [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5277MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.698 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:19 np0005592159 python3.9[226318]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 22 08:55:19 np0005592159 systemd[1]: Stopping nova_compute container...
Jan 22 08:55:19 np0005592159 nova_compute[225413]: 2026-01-22 13:55:19.875 225417 WARNING nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] No compute node record for compute-2.ctlplane.example.com:d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host d4dcb68c-0009-4467-a6f7-0e9fe0236fbc could not be found.#033[00m
Jan 22 08:55:20 np0005592159 nova_compute[225413]: 2026-01-22 13:55:20.083 225417 INFO nova.compute.resource_tracker [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc#033[00m
Jan 22 08:55:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:20.105+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:20 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:20 np0005592159 nova_compute[225413]: 2026-01-22 13:55:20.472 225417 DEBUG oslo_concurrency.lockutils [None req-68382427-a0a2-48e0-a70f-2ed2bc6caab9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:20 np0005592159 nova_compute[225413]: 2026-01-22 13:55:20.472 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:20 np0005592159 nova_compute[225413]: 2026-01-22 13:55:20.473 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:20 np0005592159 nova_compute[225413]: 2026-01-22 13:55:20.473 225417 DEBUG oslo_concurrency.lockutils [None req-ee1f4a76-154f-4892-8104-e11453778766 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:20.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:20 np0005592159 virtqemud[225907]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 22 08:55:20 np0005592159 virtqemud[225907]: hostname: compute-2
Jan 22 08:55:20 np0005592159 virtqemud[225907]: End of file while reading data: Input/output error
Jan 22 08:55:20 np0005592159 systemd[1]: libpod-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649.scope: Deactivated successfully.
Jan 22 08:55:20 np0005592159 systemd[1]: libpod-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649.scope: Consumed 3.619s CPU time.
Jan 22 08:55:20 np0005592159 podman[226323]: 2026-01-22 13:55:20.869987571 +0000 UTC m=+1.000893923 container died 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute)
Jan 22 08:55:20 np0005592159 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:20 np0005592159 systemd[1]: var-lib-containers-storage-overlay-c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b-merged.mount: Deactivated successfully.
Jan 22 08:55:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:21.086+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:21 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:21 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:21.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:22.040+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:22 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:22 np0005592159 podman[226323]: 2026-01-22 13:55:22.204271854 +0000 UTC m=+2.335178206 container cleanup 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 22 08:55:22 np0005592159 podman[226323]: nova_compute
Jan 22 08:55:22 np0005592159 podman[226404]: nova_compute
Jan 22 08:55:22 np0005592159 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 22 08:55:22 np0005592159 systemd[1]: Stopped nova_compute container.
Jan 22 08:55:22 np0005592159 systemd[1]: Starting nova_compute container...
Jan 22 08:55:22 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:55:22 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:22 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:22 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:22 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:22 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6c548d1f25210951fff7cdd77840abeaccd4dd3dbddfe66f57affb74e2fc25b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:22 np0005592159 podman[226417]: 2026-01-22 13:55:22.379101702 +0000 UTC m=+0.084987919 container init 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 08:55:22 np0005592159 podman[226417]: 2026-01-22 13:55:22.386094603 +0000 UTC m=+0.091980820 container start 572ffe12c89ef3d651b3d5a5d0d084d01048037ddf29c596a9682c34d685f649 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 22 08:55:22 np0005592159 podman[226417]: nova_compute
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + sudo -E kolla_set_configs
Jan 22 08:55:22 np0005592159 systemd[1]: Started nova_compute container.
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Validating config file
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying service configuration files
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /etc/ceph
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Creating directory /etc/ceph
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Writing out command to execute
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:22 np0005592159 nova_compute[226433]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 22 08:55:22 np0005592159 nova_compute[226433]: ++ cat /run_command
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + CMD=nova-compute
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + ARGS=
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + sudo kolla_copy_cacerts
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + [[ ! -n '' ]]
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + . kolla_extend_start
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + echo 'Running command: '\''nova-compute'\'''
Jan 22 08:55:22 np0005592159 nova_compute[226433]: Running command: 'nova-compute'
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + umask 0022
Jan 22 08:55:22 np0005592159 nova_compute[226433]: + exec nova-compute
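
The trace above is the Kolla entrypoint: kolla_set_configs copies each file listed in /var/lib/kolla/config_files/config.json into place, then the wrapper reads /run_command and execs nova-compute. A rough Python sketch of that copy-and-exec flow follows; the simplified config.json schema ("config_files" entries with "source", "dest", "perm") and the function names are assumptions for illustration, not the actual Kolla implementation.

    import json
    import os
    import shutil

    def set_configs(config_path="/var/lib/kolla/config_files/config.json"):
        # Mirrors the "Loading config file ... / Copying ... / Setting permission ..."
        # messages above: remove any stale destination, copy the source in, chmod it.
        with open(config_path) as f:
            config = json.load(f)
        for entry in config.get("config_files", []):
            src, dest = entry["source"], entry["dest"]
            if os.path.isdir(dest) and not os.path.islink(dest):
                shutil.rmtree(dest)                      # e.g. "Deleting /etc/ceph"
            elif os.path.lexists(dest):
                os.remove(dest)                          # e.g. "Deleting /etc/nova/nova.conf"
            if os.path.isdir(src):
                shutil.copytree(src, dest)
            else:
                shutil.copy2(src, dest)
            if "perm" in entry:
                os.chmod(dest, int(entry["perm"], 8))    # "Setting permission for ..."

    def run_service():
        # "++ cat /run_command" then "+ exec nova-compute" in the shell trace above
        with open("/run_command") as f:
            cmd = f.read().strip()
        os.execvp(cmd, [cmd])
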
Jan 22 08:55:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:23.008+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:23 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 08:55:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 08:55:24 np0005592159 podman[226497]: 2026-01-22 13:55:24.006386849 +0000 UTC m=+0.061515666 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 08:55:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:24.029+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:24 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:24 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 1113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.359 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.359 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.360 226437 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.360 226437 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Jan 22 08:55:24 np0005592159 python3.9[226618]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.494 226437 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.506 226437 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:24 np0005592159 nova_compute[226433]: 2026-01-22 13:55:24.506 226437 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
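
The three lines above show the freshly started nova-compute grepping /sbin/iscsiadm for the node.session.scan string to detect whether manual iSCSI scans are supported; exit code 1 only means the string was not found, so the command is not retried. A minimal standalone sketch of that probe, with a hypothetical helper name, is:

    import subprocess

    def iscsiadm_supports_manual_scan(path="/sbin/iscsiadm"):
        # grep -F exits 0 when the literal string is present, 1 when it is not
        result = subprocess.run(
            ["grep", "-F", "node.session.scan", path],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0
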
Jan 22 08:55:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:55:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:55:24 np0005592159 systemd[1]: Started libpod-conmon-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope.
Jan 22 08:55:24 np0005592159 systemd[1]: Started libcrun container.
Jan 22 08:55:25 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:25 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:25 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 22 08:55:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:25.021+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:25 np0005592159 podman[226649]: 2026-01-22 13:55:25.025103 +0000 UTC m=+0.469588513 container init 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 08:55:25 np0005592159 podman[226649]: 2026-01-22 13:55:25.034272671 +0000 UTC m=+0.478758184 container start 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3)
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.040 226437 INFO nova.virt.driver [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Jan 22 08:55:25 np0005592159 python3.9[226618]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Applying nova statedir ownership
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 22 08:55:25 np0005592159 nova_compute_init[226671]: INFO:nova_statedir:Nova statedir ownership complete
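
The nova_compute_init output above is the statedir ownership pass: walk /var/lib/nova, chown anything not already owned by the nova user (42436:42436), reset the SELinux context, and skip the paths named in NOVA_STATEDIR_OWNERSHIP_SKIP. A rough sketch of that walk, assuming ownership is the only concern (the SELinux relabelling visible in the log is omitted); names here are illustrative, not the actual nova_statedir_ownership.py source:

    import os

    TARGET_UID = TARGET_GID = 42436

    SKIP = set(filter(None, os.environ.get(
        "NOVA_STATEDIR_OWNERSHIP_SKIP", "/var/lib/nova/compute_id").split(":")))

    def apply_ownership(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue                  # skipped entries are left untouched
                st = os.stat(path, follow_symlinks=False)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # Matches "Changing ownership of ... from 1000:1000 to 42436:42436"
                    os.chown(path, TARGET_UID, TARGET_GID, follow_symlinks=False)
                # Otherwise: "Ownership of ... already 42436:42436", nothing to do
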
Jan 22 08:55:25 np0005592159 systemd[1]: libpod-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope: Deactivated successfully.
Jan 22 08:55:25 np0005592159 podman[226683]: 2026-01-22 13:55:25.131340849 +0000 UTC m=+0.029999622 container died 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init)
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.153 226437 INFO nova.compute.provider_config [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.165 226437 DEBUG oslo_concurrency.lockutils [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.166 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.167 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.168 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.169 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.170 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.171 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.172 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.173 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.174 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.175 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.176 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.177 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.178 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.179 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.180 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.181 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.182 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.183 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.184 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.185 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.186 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.187 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.188 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.189 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.190 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.191 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.192 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4-userdata-shm.mount: Deactivated successfully.
Jan 22 08:55:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay-89ef51e93815f3150636214dda9f67bb2eda1e63be496527cf70f833ffe953ce-merged.mount: Deactivated successfully.
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.193 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.193 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.194 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.195 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.196 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.197 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.198 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.199 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.200 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.201 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.202 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.203 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.204 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.205 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.206 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.207 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.208 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.209 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.210 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.211 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.212 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.213 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.214 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.215 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.216 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.217 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.218 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.219 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.220 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.221 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.222 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.223 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.224 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.225 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.226 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.227 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.228 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.229 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.230 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.231 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.232 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.233 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.234 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_mode               = custom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_models             = ['Nehalem'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.235 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.236 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.237 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.238 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.239 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.240 226437 WARNING oslo_config.cfg [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 22 08:55:25 np0005592159 nova_compute[226433]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 22 08:55:25 np0005592159 nova_compute[226433]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 22 08:55:25 np0005592159 nova_compute[226433]: and ``live_migration_inbound_addr`` respectively.
Jan 22 08:55:25 np0005592159 nova_compute[226433]: ).  Its value may be silently ignored in the future.#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.241 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.242 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.243 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_secret_uuid        = 088fe176-0106-5401-803c-2da38b73b76a log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.244 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.245 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.246 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.247 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.248 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.249 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.250 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.251 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.252 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.253 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.254 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.255 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.256 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.257 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.258 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 podman[226683]: 2026-01-22 13:55:25.259919081 +0000 UTC m=+0.158577834 container cleanup 384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.259 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.260 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.261 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.262 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.263 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.264 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 systemd[1]: libpod-conmon-384311074c185cc2bd08af1e04f8bece9d73e2ea32d868979213354237efbac4.scope: Deactivated successfully.
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.265 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.266 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.267 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.268 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.269 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.270 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.271 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.272 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.273 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.274 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.275 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.276 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.277 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.278 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.279 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.280 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.281 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.282 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.283 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.284 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.285 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.286 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.287 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.288 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.289 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.290 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.291 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.292 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.293 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.294 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.295 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.296 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.297 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.298 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.299 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.300 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.301 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.302 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.303 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.304 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.305 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.306 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.307 226437 DEBUG oslo_service.service [None req-3335f40b-d7fd-4d0c-b63a-549ffa0b6118 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
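[editor's note] The block ending at the row of asterisks above is oslo.config dumping every resolved option value at DEBUG level during service startup; each line cites the same call site, log_opt_values in oslo_config/cfg.py. A minimal, hypothetical Python sketch of triggering that same dump in a standalone program follows (the option group and name are illustrative only, not Nova's configuration code):

    # Minimal sketch: dump all registered option values at DEBUG level using
    # the same oslo.config API (ConfigOpts.log_opt_values) referenced in the
    # log lines above. The example option/group is an assumption for brevity.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([cfg.BoolOpt('ssl', default=False)],
                       group='oslo_messaging_rabbit')

    CONF([], project='demo')                   # parse an (empty) command line
    CONF.log_opt_values(LOG, logging.DEBUG)    # emits the "option = value" lines

[end note]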
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.308 226437 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.378 226437 INFO nova.virt.node [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.379 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.380 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.391 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdd7ca57070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.393 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdd7ca57070> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.394 226437 INFO nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Connection event '1' reason 'None'#033[00m
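[editor's note] The lines above record the libvirt driver opening qemu:///system and registering for lifecycle and connection events before it logs the host capabilities. A minimal sketch of that pattern with the libvirt-python bindings, assuming the same connection URI; this is not Nova's implementation, and the callback body is illustrative:

    # Minimal sketch (not Nova's code): connect to qemu:///system and register
    # a domain lifecycle callback, mirroring the event registration logged above.
    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Invoked on domain lifecycle changes (started, stopped, ...).
        print(f"domain {dom.name()}: event={event} detail={detail}")

    libvirt.virEventRegisterDefaultImpl()      # an event loop impl must exist first
    conn = libvirt.open("qemu:///system")      # same URI as in the log above
    conn.domainEventRegisterAny(None,
                                libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                                lifecycle_cb, None)

    while True:                                # drive the default event loop
        libvirt.virEventRunDefaultImpl()

[end note]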
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.401 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host capabilities <capabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <host>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <uuid>5492a354-d192-4c48-8602-99be1884b049</uuid>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <arch>x86_64</arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <microcode version='16777317'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <signature family='23' model='49' stepping='0'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <maxphysaddr mode='emulate' bits='40'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='x2apic'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='tsc-deadline'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='osxsave'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='hypervisor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='tsc_adjust'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='spec-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='stibp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='arch-capabilities'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='cmp_legacy'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='topoext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='virt-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='lbrv'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='tsc-scale'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='vmcb-clean'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='pause-filter'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='pfthreshold'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='rdctl-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='skip-l1dfl-vmentry'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='mds-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature name='pschange-mc-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <pages unit='KiB' size='4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <pages unit='KiB' size='2048'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <pages unit='KiB' size='1048576'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <power_management>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <suspend_mem/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </power_management>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <iommu support='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <migration_features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <live/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <uri_transports>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <uri_transport>tcp</uri_transport>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <uri_transport>rdma</uri_transport>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </uri_transports>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </migration_features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <topology>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <cells num='1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <cell id='0'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <memory unit='KiB'>7864312</memory>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <pages unit='KiB' size='4'>1966078</pages>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <pages unit='KiB' size='2048'>0</pages>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <pages unit='KiB' size='1048576'>0</pages>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <distances>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <sibling id='0' value='10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          </distances>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          <cpus num='8'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:          </cpus>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        </cell>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </cells>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </topology>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <cache>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </cache>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <secmodel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model>selinux</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <doi>0</doi>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </secmodel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <secmodel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model>dac</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <doi>0</doi>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <baselabel type='kvm'>+107:+107</baselabel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <baselabel type='qemu'>+107:+107</baselabel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </secmodel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </host>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <guest>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <os_type>hvm</os_type>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <arch name='i686'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <wordsize>32</wordsize>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <domain type='qemu'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <domain type='kvm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <pae/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <nonpae/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <apic default='on' toggle='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <cpuselection/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <deviceboot/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <externalSnapshot/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </guest>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <guest>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <os_type>hvm</os_type>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <arch name='x86_64'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <wordsize>64</wordsize>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <domain type='qemu'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <domain type='kvm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <acpi default='on' toggle='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <apic default='on' toggle='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <cpuselection/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <deviceboot/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <disksnapshot default='on' toggle='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <externalSnapshot/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </guest>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </capabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: #033[00m
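[editor's note] The <capabilities> document above is what libvirt returns from getCapabilities(); nova-compute reads it to learn the host CPU model, NUMA topology and supported guest architectures. A minimal sketch of fetching and parsing a few of those fields with libvirt-python and ElementTree; which fields to extract is an illustrative choice, not Nova's exact logic:

    # Minimal sketch: fetch the same capabilities XML shown above and pull out
    # the host arch, CPU model and per-guest machine types.
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open("qemu:///system")
    caps = ET.fromstring(conn.getCapabilities())

    host_cpu = caps.find("host/cpu")
    print("host arch :", host_cpu.findtext("arch"))    # e.g. x86_64
    print("cpu model :", host_cpu.findtext("model"))   # e.g. EPYC-Rome-v4

    for guest in caps.findall("guest"):
        arch = guest.find("arch").get("name")
        machines = [m.text for m in guest.findall("arch/machine")]
        print(f"guest arch {arch}: {len(machines)} machine types")

The <domainCapabilities> dump that follows in the log comes from the related per-arch/machine-type getDomainCapabilities() call.
[end note]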
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.409 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.413 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 22 08:55:25 np0005592159 nova_compute[226433]: <domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <arch>i686</arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <vcpu max='4096'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <os supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>rom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pflash</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>yes</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='secure'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </loader>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </os>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>memfd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </memoryBacking>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>disk</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>floppy</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>lun</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>fdc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>sata</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </disk>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vnc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </graphics>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <video supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vga</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>none</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>bochs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </video>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='mode'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>requisite</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>optional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pci</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hostdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>random</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </rng>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>path</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>handle</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </filesystem>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emulator</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>external</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>2.0</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </tpm>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </redirdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </channel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </crypto>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>passt</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </interface>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>isa</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </panic>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <console supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>null</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dev</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pipe</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stdio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>udp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tcp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </console>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='features'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vapic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>runtime</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>synic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stimer</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reset</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ipi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>avic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hyperv>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.425 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 22 08:55:25 np0005592159 nova_compute[226433]: <domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <arch>i686</arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <vcpu max='240'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <os supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>rom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pflash</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>yes</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='secure'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </loader>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </os>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:25.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>memfd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </memoryBacking>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>disk</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>floppy</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>lun</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ide</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>fdc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>sata</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </disk>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vnc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </graphics>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <video supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vga</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>none</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>bochs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </video>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='mode'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>requisite</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>optional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pci</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hostdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>random</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </rng>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>path</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>handle</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </filesystem>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emulator</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>external</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>2.0</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </tpm>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </redirdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </channel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </crypto>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>passt</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </interface>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>isa</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </panic>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <console supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>null</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dev</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pipe</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stdio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>udp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tcp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </console>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='features'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vapic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>runtime</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>synic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stimer</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reset</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ipi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>avic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hyperv>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.488 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.491 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 22 08:55:25 np0005592159 nova_compute[226433]: <domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <machine>pc-q35-rhel9.8.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <arch>x86_64</arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <vcpu max='4096'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <os supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='firmware'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>efi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>rom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pflash</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>yes</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='secure'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>yes</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </loader>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </os>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>memfd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </memoryBacking>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>disk</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>floppy</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>lun</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>fdc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>sata</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </disk>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vnc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </graphics>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <video supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vga</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>none</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>bochs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </video>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='mode'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>requisite</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>optional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pci</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hostdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>random</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </rng>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>path</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>handle</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </filesystem>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emulator</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>external</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>2.0</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </tpm>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </redirdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </channel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </crypto>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>passt</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </interface>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>isa</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </panic>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <console supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>null</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dev</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pipe</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stdio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>udp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tcp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </console>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='features'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vapic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>runtime</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>synic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stimer</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reset</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ipi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>avic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hyperv>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.571 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 22 08:55:25 np0005592159 nova_compute[226433]: <domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <path>/usr/libexec/qemu-kvm</path>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <domain>kvm</domain>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <machine>pc-i440fx-rhel7.6.0</machine>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <arch>x86_64</arch>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <vcpu max='240'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <iothreads supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <os supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='firmware'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <loader supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>rom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pflash</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='readonly'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>yes</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='secure'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>no</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </loader>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </os>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-passthrough' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='hostPassthroughMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='maximum' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='maximumMigratable'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>on</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>off</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='host-model' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model fallback='forbid'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <vendor>AMD</vendor>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <maxphysaddr mode='passthrough' limit='40'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='x2apic'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-deadline'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='hypervisor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc_adjust'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='spec-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='stibp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='cmp_legacy'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='overflow-recov'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='succor'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='amd-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='virt-ssbd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lbrv'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='tsc-scale'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='vmcb-clean'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='flushbyasid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pause-filter'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='pfthreshold'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='svme-addr-chk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='require' name='lfence-always-serializing'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <feature policy='disable' name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <mode name='custom' supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Broadwell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cascadelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='ClearwaterForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ddpd-u'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sha512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm3'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sm4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Cooperlake-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Denverton-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Dhyana-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Genoa-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Milan-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Rome-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-Turin-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amd-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='auto-ibrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vp2intersect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fs-gs-base-ns'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibpb-brtype'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='no-nested-data-bp'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='null-sel-clr-base'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='perfmon-v2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbpb'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='srso-user-kernel-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='stibp-always-on'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='EPYC-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='GraniteRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-128'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-256'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx10-512'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='prefetchiti'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Haswell-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-noTSX'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v6'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Icelake-Server-v7'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='IvyBridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='KnightsMill-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4fmaps'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-4vnniw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512er'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512pf'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G4-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Opteron_G5-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fma4'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tbm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xop'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SapphireRapids-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='amx-tile'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-bf16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-fp16'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512-vpopcntdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bitalg'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vbmi2'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrc'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fzrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='la57'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='taa-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='tsx-ldtrk'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='SierraForest-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ifma'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-ne-convert'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx-vnni-int8'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bhi-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='bus-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cmpccxadd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fbsdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='fsrs'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ibrs-all'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='intel-psfd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ipred-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='lam'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mcdt-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pbrsb-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='psdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rrsba-ctrl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='sbdr-ssdp-no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='serialize'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vaes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='vpclmulqdq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Client-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='hle'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='rtm'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Skylake-Server-v5'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512bw'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512cd'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512dq'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512f'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='avx512vl'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='invpcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pcid'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='pku'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='mpx'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v2'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v3'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='core-capability'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='split-lock-detect'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='Snowridge-v4'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='cldemote'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='erms'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='gfni'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdir64b'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='movdiri'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='xsaves'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='athlon-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='core2duo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='coreduo-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='n270-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='ss'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <blockers model='phenom-v1'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnow'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <feature name='3dnowext'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </blockers>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </mode>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <memoryBacking supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <enum name='sourceType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>anonymous</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <value>memfd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </memoryBacking>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <disk supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='diskDevice'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>disk</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cdrom</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>floppy</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>lun</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ide</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>fdc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>sata</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </disk>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <graphics supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vnc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egl-headless</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </graphics>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <video supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='modelType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vga</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>cirrus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>none</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>bochs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ramfb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </video>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hostdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='mode'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>subsystem</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='startupPolicy'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>mandatory</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>requisite</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>optional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='subsysType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pci</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>scsi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='capsType'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='pciBackend'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hostdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <rng supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtio-non-transitional</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>random</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>egd</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </rng>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <filesystem supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='driverType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>path</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>handle</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>virtiofs</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </filesystem>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tpm supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-tis</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tpm-crb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emulator</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>external</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendVersion'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>2.0</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </tpm>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <redirdev supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='bus'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>usb</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </redirdev>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <channel supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </channel>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <crypto supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendModel'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>builtin</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </crypto>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <interface supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='backendType'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>default</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>passt</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </interface>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <panic supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='model'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>isa</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>hyperv</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </panic>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <console supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='type'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>null</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vc</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pty</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dev</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>file</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>pipe</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stdio</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>udp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tcp</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>unix</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>qemu-vdagent</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>dbus</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </console>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </devices>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <gic supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <vmcoreinfo supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <genid supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backingStoreInput supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <backup supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <async-teardown supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <s390-pv supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <ps2 supported='yes'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <tdx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sev supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <sgx supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <hyperv supported='yes'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <enum name='features'>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>relaxed</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vapic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>spinlocks</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vpindex</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>runtime</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>synic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>stimer</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reset</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>vendor_id</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>frequencies</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>reenlightenment</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>tlbflush</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>ipi</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>avic</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>emsr_bitmap</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <value>xmm_input</value>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </enum>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      <defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <spinlocks>4095</spinlocks>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <stimer_direct>on</stimer_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_direct>on</tlbflush_direct>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <tlbflush_extended>on</tlbflush_extended>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:        <vendor_id>Linux KVM Hv</vendor_id>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:      </defaults>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    </hyperv>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:    <launchSecurity supported='no'/>
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  </features>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </domainCapabilities>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
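The block above is the tail of libvirt's domainCapabilities document, which nova-compute retrieves from the hypervisor at startup to learn which named CPU models are usable on this host (each usable='no' entry lists its blocking features), plus the supported memory backing, devices and platform features. A minimal sketch of pulling the same report outside of Nova with the libvirt Python bindings; the qemu:///system URI and the filter on the 'custom' CPU mode are assumptions for illustration, not taken from this log:

    # Fetch the domainCapabilities XML and list the CPU models reported as usable.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # Arguments: emulator binary, arch, machine type, virt type; None lets libvirt pick defaults.
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)
    usable = [m.text for m in root.findall("./cpu/mode[@name='custom']/model")
              if m.get('usable') == 'yes']
    print('usable CPU models:', usable)
    conn.close()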
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.634 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.634 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Secure Boot support detected#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.636 226437 DEBUG nova.virt.libvirt.volume.mount [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.637 226437 INFO nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.637 226437 INFO nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.649 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] cpu compare xml: <cpu match="exact">
Jan 22 08:55:25 np0005592159 nova_compute[226433]:  <model>Nehalem</model>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: </cpu>
Jan 22 08:55:25 np0005592159 nova_compute[226433]: _compare_cpu /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10019#033[00m
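The "cpu compare xml" block above is what nova hands to libvirt to verify that the requested guest CPU model (here the custom 'Nehalem' model) can run on this host. A minimal sketch of that same check, assuming the libvirt-python bindings and a local qemu:///system connection are available (this is illustrative, not nova's own code path):

import libvirt

CPU_XML = """<cpu match="exact">
  <model>Nehalem</model>
</cpu>"""

conn = libvirt.open("qemu:///system")
try:
    # compareCPU() returns VIR_CPU_COMPARE_INCOMPATIBLE (0),
    # VIR_CPU_COMPARE_IDENTICAL (1) or VIR_CPU_COMPARE_SUPERSET (2).
    result = conn.compareCPU(CPU_XML, 0)
    if result > libvirt.VIR_CPU_COMPARE_INCOMPATIBLE:
        print("host CPU can run the requested 'Nehalem' guest model")
    else:
        print("requested guest CPU model is incompatible with this host")
finally:
    conn.close()

A positive return value (identical or superset) is what lets the driver proceed past this check.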
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.651 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.700 226437 INFO nova.virt.node [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Determined node identity d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from /var/lib/nova/compute_id#033[00m
Jan 22 08:55:25 np0005592159 nova_compute[226433]: 2026-01-22 13:55:25.795 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Verified node d4dcb68c-0009-4467-a6f7-0e9fe0236fbc matches my host compute-2.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Jan 22 08:55:25 np0005592159 systemd[1]: session-49.scope: Deactivated successfully.
Jan 22 08:55:25 np0005592159 systemd[1]: session-49.scope: Consumed 2min 615ms CPU time.
Jan 22 08:55:25 np0005592159 systemd-logind[787]: Session 49 logged out. Waiting for processes to exit.
Jan 22 08:55:25 np0005592159 systemd-logind[787]: Removed session 49.
Jan 22 08:55:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:25.981+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:25 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.062 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Jan 22 08:55:26 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.795 226437 ERROR nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Could not retrieve compute node resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-77ac1ed1-4613-4939-b9ce-bd0ba145b90b"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-77ac1ed1-4613-4939-b9ce-bd0ba145b90b"}]}#033[00m
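The 404 above is the placement service reporting that this compute node's resource provider record does not exist yet (it is created a couple of seconds later, at 13:55:28). A rough sketch of reproducing the check by hand with a direct GET against the placement API; the endpoint URL and token below are placeholders, not values taken from this deployment:

import requests

# Placeholders -- substitute this deployment's placement endpoint and a
# valid keystone token; neither appears in the log.
PLACEMENT_URL = "http://placement.example.com/placement"
TOKEN = "<keystone-token>"
PROVIDER_UUID = "d4dcb68c-0009-4467-a6f7-0e9fe0236fbc"

resp = requests.get(
    f"{PLACEMENT_URL}/resource_providers/{PROVIDER_UUID}",
    headers={
        "X-Auth-Token": TOKEN,
        "OpenStack-API-Version": "placement 1.10",
    },
)
# 404 at this point in the log; 200 once the provider record exists.
print(resp.status_code, resp.text)

Once the provider record shown later in the log has been created, the same request returns 200 with the provider's generation.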
Jan 22 08:55:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.825 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.825 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 08:55:26 np0005592159 nova_compute[226433]: 2026-01-22 13:55:26.826 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:26.977+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:26 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:27 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/788234680' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.248 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
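The `ceph df --format=json` call above is how the resource tracker samples RBD pool capacity for its disk inventory. A minimal sketch of running the same command and reading the cluster totals; the `stats`/`total_bytes` keys are the usual `ceph df` JSON fields and are assumed here rather than guaranteed:

import json
import subprocess

# Same command and credentials as the log line above.
out = subprocess.check_output([
    "ceph", "df", "--format=json",
    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
])
stats = json.loads(out).get("stats", {})
total_gib = stats.get("total_bytes", 0) / 1024 ** 3
avail_gib = stats.get("total_avail_bytes", 0) / 1024 ** 3
print(f"cluster: {total_gib:.1f} GiB total, {avail_gib:.1f} GiB available")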
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.392 226437 WARNING nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5240MB free_disk=20.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.393 226437 DEBUG oslo_concurrency.lockutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 08:55:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 08:55:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.573 226437 ERROR nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-7a92631c-3b10-4c61-9675-104bac57ecff"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider 'd4dcb68c-0009-4467-a6f7-0e9fe0236fbc' not found: No resource provider with uuid d4dcb68c-0009-4467-a6f7-0e9fe0236fbc found  ", "request_id": "req-7a92631c-3b10-4c61-9675-104bac57ecff"}]}#033[00m
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.574 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 08:55:27 np0005592159 nova_compute[226433]: 2026-01-22 13:55:27.574 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=20GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 08:55:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:27.988+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:27 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:28 np0005592159 nova_compute[226433]: 2026-01-22 13:55:28.263 226437 INFO nova.scheduler.client.report [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [req-8c319d9d-459a-4b8f-91bb-e2a832e80a5c] Created resource provider record via placement API for resource provider with UUID d4dcb68c-0009-4467-a6f7-0e9fe0236fbc and name compute-2.ctlplane.example.com.#033[00m
Jan 22 08:55:28 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:28.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:28 np0005592159 nova_compute[226433]: 2026-01-22 13:55:28.902 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 08:55:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:28.942+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:28 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 08:55:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3711 writes, 21K keys, 3711 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s#012Cumulative WAL: 3711 writes, 3711 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1676 writes, 8855 keys, 1676 commit groups, 1.0 writes per commit group, ingest: 15.75 MB, 0.03 MB/s#012Interval WAL: 1676 writes, 1676 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     80.5      0.29              0.06        11    0.027       0      0       0.0       0.0#012  L6      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    125.1    105.3      0.79              0.20        10    0.079     53K   5365       0.0       0.0#012 Sum      1/0    7.98 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5     91.2     98.6      1.08              0.27        21    0.052     53K   5365       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.5     82.8     82.9      0.70              0.16        12    0.059     35K   3554       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    125.1    105.3      0.79              0.20        10    0.079     53K   5365       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     81.4      0.29              0.06        10    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.023, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.10 GB write, 0.09 MB/s write, 0.10 GB read, 0.08 MB/s read, 1.1 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 7.02 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.6e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(357,6.61 MB,2.17451%) FilterBlock(21,158.98 KB,0.0510718%) IndexBlock(21,261.39 KB,0.0839685%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 08:55:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 08:55:29 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2684260221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.361 226437 DEBUG oslo_concurrency.processutils [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.366 226437 DEBUG nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 22 08:55:29 np0005592159 nova_compute[226433]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.366 226437 INFO nova.virt.libvirt.host [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] kernel doesn't support AMD SEV#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.367 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.367 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.369 226437 DEBUG nova.virt.libvirt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Libvirt baseline CPU <cpu>
Jan 22 08:55:29 np0005592159 nova_compute[226433]:  <arch>x86_64</arch>
Jan 22 08:55:29 np0005592159 nova_compute[226433]:  <model>Nehalem</model>
Jan 22 08:55:29 np0005592159 nova_compute[226433]:  <vendor>AMD</vendor>
Jan 22 08:55:29 np0005592159 nova_compute[226433]:  <topology sockets="8" cores="1" threads="1"/>
Jan 22 08:55:29 np0005592159 nova_compute[226433]: </cpu>
Jan 22 08:55:29 np0005592159 nova_compute[226433]: _get_guest_baseline_cpu_features /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12537#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.512 226437 DEBUG nova.scheduler.client.report [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updated inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 20, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.513 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 22 08:55:29 np0005592159 nova_compute[226433]: 2026-01-22 13:55:29.513 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 0, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
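The inventory pushed to placement above is what bounds how much the scheduler may pack onto this node: for each resource class, schedulable capacity is effectively (total - reserved) * allocation_ratio. A quick sketch of that arithmetic with the values from the log line:

# Values copied from the inventory in the log line above.
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 20, "reserved": 0, "allocation_ratio": 0.9},
}
for resource_class, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{resource_class}: schedulable capacity = {capacity}")
# MEMORY_MB: 7167.0, VCPU: 32.0, DISK_GB: 18.0

So the 7679 MB host with 512 MB reserved exposes 7167 MB of schedulable RAM, while the 4.0 VCPU allocation ratio allows 32 vCPUs to be allocated on 8 physical ones.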
Jan 22 08:55:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:29 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:29.927+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:29 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:30 np0005592159 nova_compute[226433]: 2026-01-22 13:55:30.546 226437 DEBUG nova.compute.provider_tree [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Updating resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Jan 22 08:55:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 08:55:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:30.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 08:55:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:30.923+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:30 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:13:55:31.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T13:55:31.932+0000 7f47f8ed4640 -1 osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:31 np0005592159 ceph-osd[79779]: osd.2 139 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 08:55:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 08:55:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 2 ])
Jan 22 08:55:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 08:55:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 08:55:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:13:55:32.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 08:55:32 np0005592159 nova_compute[226433]: 2026-01-22 13:55:32.835 226437 DEBUG nova.compute.resource_tracker [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:03:35 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:35.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:36 np0005592159 rsyslogd[1002]: imjournal: 4930 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 22 09:03:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:36.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:36 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:36 np0005592159 podman[231301]: 2026-01-22 14:03:36.988125755 +0000 UTC m=+0.050687167 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:03:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:37.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:37 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:37.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:38.534+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:39 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:39.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:03:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:39.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:03:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:40.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:40 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:40 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1609 sec, osd.2 has slow ops (SLOW_OPS)
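The SLOW_OPS health update above says osd.2 has had the same nine ops blocked for roughly 27 minutes. A minimal sketch for inspecting what those ops are via the OSD admin socket; on a containerized deployment like this one the command typically has to run inside the OSD container or a `cephadm shell`:

import json
import subprocess

# Ask osd.2's admin socket for the operations currently in flight.
out = subprocess.check_output(["ceph", "daemon", "osd.2", "dump_ops_in_flight"])
report = json.loads(out)
print("ops in flight:", report.get("num_ops"))
for op in report.get("ops", []):
    # 'age' is seconds since the op arrived; 'description' matches the
    # osd_op(...) text repeated throughout this log.
    print(op.get("age"), op.get("description"))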
Jan 22 09:03:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:40.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:41 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:41.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:41.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:42.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:42.464+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:42 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:42 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:43.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:43 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:43.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:44.461+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:45 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:45 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1614 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:45.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:45.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:46 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:46.417+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:03:47.172 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:03:47 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:47.371+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:03:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:47.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:03:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:48.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:03:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:49.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:49.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:03:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:50.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:03:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:50.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:50 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:51.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:03:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:51.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:03:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:52.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:52.434+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:52 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:53.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:53.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:53 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:53 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:54.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:54.421+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:55 np0005592159 podman[231510]: 2026-01-22 14:03:55.031059679 +0000 UTC m=+0.092444408 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:03:55 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:55.437+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:03:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:55.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:03:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:56.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:56.481+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:03:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:57.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:57 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:57.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:03:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:03:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:03:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:03:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:58.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:58 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:03:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:03:59.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:03:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:03:59 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:03:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:03:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:03:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:03:59.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:00.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:00 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:01.455+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:01 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:01.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:02.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:03.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:04 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:04:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:04.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:04:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:04.402+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:05.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:05.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:06.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:06.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:06 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:07.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:07 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:07.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:08 np0005592159 podman[231643]: 2026-01-22 14:04:08.006824766 +0000 UTC m=+0.065288852 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:04:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:08.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:08.423+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:09.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:09 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:09.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:04:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:10.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:04:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:10.437+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:10 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:11.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:11 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:04:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:12.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:12 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:12.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:12.480+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:13 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:13.497+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:14.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:14.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:14.476+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:14 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:15.458+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:04:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:16.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:04:16 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:16 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:16.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:16.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:17 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:17.459+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:18.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:18.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:18 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:18.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:19 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:19.528+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:04:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:20.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:04:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:20.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:20 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:20 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:20.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:21.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:21 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:22.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:22.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:22.539+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:22 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:23.587+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:23 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:24.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:24.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:24.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:24 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1654 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:25 np0005592159 podman[231697]: 2026-01-22 14:04:25.277337981 +0000 UTC m=+0.087865443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:04:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:25.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:25 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:26.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:26.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:26.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:27 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:27.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:28.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:28 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:28 np0005592159 nova_compute[226433]: 2026-01-22 14:04:28.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:28.596+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:29 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:29.605+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:30.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:30 np0005592159 nova_compute[226433]: 2026-01-22 14:04:30.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:30 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:30 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1659 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:30.627+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.551 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.552 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 22 09:04:31 np0005592159 nova_compute[226433]: 2026-01-22 14:04:31.552 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:04:31 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:31.626+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:04:31 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4110622290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.007 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:04:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:32.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.174 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5205MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.175 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.271 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.272 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.273 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 09:04:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:32.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.314 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:04:32 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:32.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:04:32 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1738843088' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.728 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.733 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.768 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.770 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:04:32 np0005592159 nova_compute[226433]: 2026-01-22 14:04:32.770 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:04:33 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:33.702+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.767 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.767 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.768 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.768 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.788 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.789 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.790 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:04:33 np0005592159 nova_compute[226433]: 2026-01-22 14:04:33.790 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 09:04:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:34.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:34.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:34.698+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:34 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:34 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1664 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:35.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:35 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:36.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:36.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:36.670+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:36 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:37.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:38.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:38 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:38.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:38.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:38 np0005592159 podman[231803]: 2026-01-22 14:04:38.993378391 +0000 UTC m=+0.053998678 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:04:39 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:39.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:40.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:40.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:40 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:40 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1669 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:40.624+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:41 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:41.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:42.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:42.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:42 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:42 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:42.553+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:43.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:44.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:44.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:44 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:44.585+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:45.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:45 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1674 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:46.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:46.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:46.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.173 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:04:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:04:47 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:47.570+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:48.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:04:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:48.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:04:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:48.604+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:49.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:49 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:50.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:50.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:50.636+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:51 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1679 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:51 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:51.684+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:52 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:52.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:52.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:52.666+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:53 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:53.656+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:54.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:54 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:54.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:54.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:55 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:55 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1684 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:04:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:55.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:56 np0005592159 podman[231880]: 2026-01-22 14:04:56.049564977 +0000 UTC m=+0.112166115 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:04:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:56.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:56.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:56 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:56.622+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:57.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:04:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:04:58.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:04:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:04:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:04:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:04:58.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:04:58 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:58.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:04:59 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:04:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:04:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:04:59.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:04:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:00.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:00.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:00 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:00 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1689 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:01.559+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:05:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:02.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:02.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:02.596+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:02 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:03.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:04.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:04 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:04 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1694 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:05.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:05 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:06.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:06.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:06 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:07.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:05:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:08.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:08.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:08.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:09.659+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:10 np0005592159 podman[232263]: 2026-01-22 14:05:10.008917916 +0000 UTC m=+0.067481763 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Jan 22 09:05:10 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:10.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:10.667+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1699 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:11 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:11.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:12 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:12.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:13 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:13.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:14 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:14.721+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:15.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:15 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1704 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:15 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:16.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:16 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:16.735+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:17.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:17 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:18.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:18.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:18.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:19 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:19.784+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:20.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:20 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:20 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1709 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:20.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:20.794+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:21 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:21.747+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:22.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:22.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:22.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:22 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:23.712+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:23 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:24.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:24.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:24.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:25 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:25 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1714 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:25.748+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:26.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:26 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:26.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:26.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:27 np0005592159 podman[232344]: 2026-01-22 14:05:27.048445139 +0000 UTC m=+0.109298695 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 09:05:27 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:27.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:28.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:28 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:28.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:28.691+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:05:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 5573 writes, 31K keys, 5573 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 5573 writes, 5573 syncs, 1.00 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1862 writes, 9404 keys, 1862 commit groups, 1.0 writes per commit group, ingest: 16.86 MB, 0.03 MB/s#012Interval WAL: 1862 writes, 1862 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     86.3      0.38              0.10        16    0.024       0      0       0.0       0.0#012  L6      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.8    130.7    109.5      1.15              0.35        15    0.077     86K   7953       0.0       0.0#012 Sum      1/0    8.50 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.8     98.2    103.7      1.54              0.45        31    0.050     86K   7953       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.6    115.0    116.2      0.45              0.18        10    0.045     33K   2588       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    130.7    109.5      1.15              0.35        15    0.077     86K   7953       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     87.1      0.38              0.10        15    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.032, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.16 GB write, 0.09 MB/s write, 0.15 GB read, 0.08 MB/s read, 1.5 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 14.31 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000127 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(752,13.68 MB,4.50034%) FilterBlock(31,253.67 KB,0.081489%) IndexBlock(31,393.67 KB,0.126462%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:05:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:29 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:29.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:30.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:30.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:30 np0005592159 nova_compute[226433]: 2026-01-22 14:05:30.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:30 np0005592159 nova_compute[226433]: 2026-01-22 14:05:30.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:30.700+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:31.722+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.799 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.799 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:05:31 np0005592159 nova_compute[226433]: 2026-01-22 14:05:31.800 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:05:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:32 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:32 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1719 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:32 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:32.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:32.702+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:05:33 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/330811639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.326 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.471 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5224MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.472 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:33.682+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:05:33 np0005592159 nova_compute[226433]: 2026-01-22 14:05:33.832 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:05:33 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:33 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.056 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:05:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:34.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:34.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:05:34 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3068488076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.471 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.477 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.496 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.498 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:05:34 np0005592159 nova_compute[226433]: 2026-01-22 14:05:34.498 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.026s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:34.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:35 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:35 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1724 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.497 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.498 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.618 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.618 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.619 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.620 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.646 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:05:35 np0005592159 nova_compute[226433]: 2026-01-22 14:05:35.646 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:35.692+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:36.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:36.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:36 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:36.645+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592159 nova_compute[226433]: 2026-01-22 14:05:37.526 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:37 np0005592159 nova_compute[226433]: 2026-01-22 14:05:37.527 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:05:37 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:37.672+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:05:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:38.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:05:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:38.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:38 np0005592159 nova_compute[226433]: 2026-01-22 14:05:38.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:38.686+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:38 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:39.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:39 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1729 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:40.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:40.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:40 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:40 np0005592159 podman[232422]: 2026-01-22 14:05:40.997405113 +0000 UTC m=+0.057885378 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 09:05:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:41.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:41 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:42.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:42.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:42.664+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:43 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:43.688+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:44.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:44 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:44.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:44.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:45 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:45 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1734 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:45.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:46.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:46 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:46.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:46.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.174 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:05:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:05:47 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:47.745+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:48.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:48.697+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:49 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:49.714+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:50.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:50.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:50 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1739 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:50 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:50.715+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:50 np0005592159 nova_compute[226433]: 2026-01-22 14:05:50.937 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:05:51 np0005592159 nova_compute[226433]: 2026-01-22 14:05:51.022 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 22 09:05:51 np0005592159 nova_compute[226433]: 2026-01-22 14:05:51.022 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:05:51 np0005592159 nova_compute[226433]: 2026-01-22 14:05:51.023 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "e0e74330-96df-479f-8baf-53fbd2ccba91" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:05:51 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:51.697+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:52.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:52.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:52 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:52.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:53 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:53.692+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:05:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:54.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:05:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:54.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:54.673+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:54 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1744 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:05:55 np0005592159 nova_compute[226433]: 2026-01-22 14:05:55.561 226437 DEBUG oslo_concurrency.lockutils [None req-4800287f-e66f-4013-8cd3-d4db81524aa2 0543a9d7720b47b580746e523aa51e97 87e683d63c47432aa4cffe28b42e8de7 - - default default] Acquiring lock "e0e74330-96df-479f-8baf-53fbd2ccba91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:05:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:55.707+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:55 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:56.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:56.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:56 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:57.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:57 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:05:58 np0005592159 podman[232500]: 2026-01-22 14:05:58.007206191 +0000 UTC m=+0.069740831 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:05:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:05:58.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:05:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:05:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:05:58.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:05:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:58.802+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:58 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:05:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:05:59.789+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:05:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:05:59 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 1749 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:00.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:00.744+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:00 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:01.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:01 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:02.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:02.681+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:03 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:03.705+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:04.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:04.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:04.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:04 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:04 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:05.694+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1754 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.830927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765831067, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2725, "num_deletes": 506, "total_data_size": 5080915, "memory_usage": 5160128, "flush_reason": "Manual Compaction"}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765864493, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 3276667, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29161, "largest_seqno": 31881, "table_properties": {"data_size": 3266581, "index_size": 5556, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 27698, "raw_average_key_size": 20, "raw_value_size": 3242643, "raw_average_value_size": 2389, "num_data_blocks": 243, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090600, "oldest_key_time": 1769090600, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 33620 microseconds, and 11595 cpu microseconds.
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.864569) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 3276667 bytes OK
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.864605) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872539) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872566) EVENT_LOG_v1 {"time_micros": 1769090765872560, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.872595) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 5067658, prev total WAL file size 5067658, number of live WAL files 2.
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(3199KB)], [57(8703KB)]
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765874080, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 12189317, "oldest_snapshot_seqno": -1}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6971 keys, 10360264 bytes, temperature: kUnknown
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765956477, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 10360264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10315908, "index_size": 25812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17477, "raw_key_size": 183819, "raw_average_key_size": 26, "raw_value_size": 10190725, "raw_average_value_size": 1461, "num_data_blocks": 1022, "num_entries": 6971, "num_filter_entries": 6971, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090765, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.956936) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10360264 bytes
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.960011) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.8 rd, 125.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 8001, records dropped: 1030 output_compression: NoCompression
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.960035) EVENT_LOG_v1 {"time_micros": 1769090765960022, "job": 34, "event": "compaction_finished", "compaction_time_micros": 82457, "compaction_time_cpu_micros": 24422, "output_level": 6, "num_output_files": 1, "total_output_size": 10360264, "num_input_records": 8001, "num_output_records": 6971, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765960920, "job": 34, "event": "table_file_deletion", "file_number": 59}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090765963007, "job": 34, "event": "table_file_deletion", "file_number": 57}
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.873859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963086) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963098) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:05 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:06:05.963099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:06:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:06.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:06.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:06.731+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:06 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:07.713+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:07 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:08.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:08.728+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:08 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:08 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:08 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:09.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:10 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:06:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:06:10 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1759 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:10.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:10.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:11 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:11.691+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:12 np0005592159 podman[232713]: 2026-01-22 14:06:12.018236321 +0000 UTC m=+0.085136577 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:06:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:12.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:12 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:12.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:13.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:14.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:14 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:14.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:14.764+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:15 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:15 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1764 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:15.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:16.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:16.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:16.765+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:17.727+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:06:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:18.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:18.679+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:18 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:18 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:19.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:20.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:20 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:20 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1769 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:20.652+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:21 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:21.624+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:22.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:22.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:22 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:22 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:23.556+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:24 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:24.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:24.545+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:25 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:25 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1774 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:25.525+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:26.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:26.569+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:27.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:27 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:28.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:28.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:28.610+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:06:28 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:29 np0005592159 podman[232841]: 2026-01-22 14:06:29.004002492 +0000 UTC m=+0.071161917 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:06:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:29.586+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:30 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 1779 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:30.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:30.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:30 np0005592159 nova_compute[226433]: 2026-01-22 14:06:30.602 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:30.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:06:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 1800.5 total, 600.0 interval
                                              Cumulative writes: 5911 writes, 24K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s
                                              Cumulative WAL: 5911 writes, 1112 syncs, 5.32 writes per sync, written: 0.02 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 722 writes, 1627 keys, 722 commit groups, 1.0 writes per commit group, ingest: 1.08 MB, 0.00 MB/s
                                              Interval WAL: 722 writes, 316 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:06:31 np0005592159 nova_compute[226433]: 2026-01-22 14:06:31.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:31 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:31.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:32.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:32.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:32 np0005592159 nova_compute[226433]: 2026-01-22 14:06:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:32.590+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:32 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:32 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:33 np0005592159 nova_compute[226433]: 2026-01-22 14:06:33.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592159 nova_compute[226433]: 2026-01-22 14:06:33.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:33.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:34.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:34 np0005592159 nova_compute[226433]: 2026-01-22 14:06:34.466 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:34 np0005592159 nova_compute[226433]: 2026-01-22 14:06:34.467 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:34 np0005592159 nova_compute[226433]: 2026-01-22 14:06:34.467 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:34 np0005592159 nova_compute[226433]: 2026-01-22 14:06:34.468 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:06:34 np0005592159 nova_compute[226433]: 2026-01-22 14:06:34.468 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:06:34 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:34.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:06:35 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/539268978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.204 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.735s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.389 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5188MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.391 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:06:35 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:35 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1784 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.547 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.568 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.603 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.604 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.622 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:06:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:35.630+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.662 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:06:35 np0005592159 nova_compute[226433]: 2026-01-22 14:06:35.708 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:06:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:36.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:36.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:06:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1569454389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:06:36 np0005592159 nova_compute[226433]: 2026-01-22 14:06:36.436 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.728s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:06:36 np0005592159 nova_compute[226433]: 2026-01-22 14:06:36.446 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:06:36 np0005592159 nova_compute[226433]: 2026-01-22 14:06:36.471 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:06:36 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:36 np0005592159 nova_compute[226433]: 2026-01-22 14:06:36.633 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:06:36 np0005592159 nova_compute[226433]: 2026-01-22 14:06:36.633 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:06:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:36.667+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:37.620+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.633 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.633 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.634 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.634 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.675 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.675 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:06:37 np0005592159 nova_compute[226433]: 2026-01-22 14:06:37.676 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:06:37 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:37 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:38.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:38.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:38.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:38 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:39.609+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:39 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1789 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:40.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:40.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:40.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:40 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:41.633+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:42 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:42.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:42.648+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:42 np0005592159 podman[232920]: 2026-01-22 14:06:42.991244584 +0000 UTC m=+0.052336462 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 09:06:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:43.698+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:43 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:44.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:44.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:44.720+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:45 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:45 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1794 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:45.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:46 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:06:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:46.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:06:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:46.729+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.175 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:06:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:06:47 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:06:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:47.774+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:48.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:48 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:48 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:48.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:49.776+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:50 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:50.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:06:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:50.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:06:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:50.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:50 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 1798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:50 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:51.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:06:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:52.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:06:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:52.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:52 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:52.778+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:53.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:54.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:54.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:54.734+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:54 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:06:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:06:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:55.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:55 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:56.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:56.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:56 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:57.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:58 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:58 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 22 09:06:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:06:58.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:06:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:06:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:06:58.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:06:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:58.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:06:59 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:06:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:06:59.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:06:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:00 np0005592159 podman[232997]: 2026-01-22 14:07:00.078035448 +0000 UTC m=+0.128524561 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:07:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:00 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:00 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:00.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:00.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:01 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:01.867+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:02.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:02 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:07:02 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:02.887+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:03.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:04.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:04.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:04 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:04 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:04.938+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:05 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:05.908+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:06.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:06.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:06.941+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:07 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:07.925+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:08 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:08.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:08.879+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:09 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:09.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:10.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:10.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:10 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:10 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:10.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:11.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 09:07:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:12.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 09:07:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:12.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:12 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:12.789+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:13.826+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:14 np0005592159 podman[233081]: 2026-01-22 14:07:14.017736067 +0000 UTC m=+0.075008990 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:07:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:14.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:07:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:14.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:07:14 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:14.813+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:15 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:15 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:15.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:16.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:16.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:16 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:07:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:16.822+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:17.840+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:18.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:18.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:07:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:19 np0005592159 podman[233398]: 2026-01-22 14:07:19.336995834 +0000 UTC m=+0.057293003 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:07:19 np0005592159 podman[233398]: 2026-01-22 14:07:19.429668578 +0000 UTC m=+0.149965727 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:07:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:19.750+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:20 np0005592159 podman[233551]: 2026-01-22 14:07:20.08781249 +0000 UTC m=+0.056715717 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:07:20 np0005592159 podman[233551]: 2026-01-22 14:07:20.104742366 +0000 UTC m=+0.073645563 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:07:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:20 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 1828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:20 np0005592159 podman[233619]: 2026-01-22 14:07:20.339454878 +0000 UTC m=+0.049524527 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, distribution-scope=public, io.openshift.tags=Ceph keepalived, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, description=keepalived for Ceph, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2)
Jan 22 09:07:20 np0005592159 podman[233619]: 2026-01-22 14:07:20.354577157 +0000 UTC m=+0.064646816 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64)
Jan 22 09:07:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:20.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:20.711+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:07:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:21.733+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:22.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:22.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:22 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:22.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:23.734+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:24.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:24.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:24.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:25 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:25.736+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:26.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:26.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:26 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:26.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:27.703+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:28.663+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:29.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:29 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:07:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:29.669+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:30.680+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:30 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:31 np0005592159 podman[233886]: 2026-01-22 14:07:31.016607021 +0000 UTC m=+0.080172499 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 22 09:07:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:31 np0005592159 nova_compute[226433]: 2026-01-22 14:07:31.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:31.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:32 np0005592159 nova_compute[226433]: 2026-01-22 14:07:32.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:32.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:33.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:33 np0005592159 nova_compute[226433]: 2026-01-22 14:07:33.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:33 np0005592159 nova_compute[226433]: 2026-01-22 14:07:33.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:33.777+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:34.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:34.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:34 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:35.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.614 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.615 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.615 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.616 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:07:35 np0005592159 nova_compute[226433]: 2026-01-22 14:07:35.616 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:07:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:35.846+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:07:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3109525077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.088 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.279 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5200MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.281 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.460 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:07:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:36.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:36.827+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:07:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/237134865' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.937 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.943 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.960 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.961 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:07:36 np0005592159 nova_compute[226433]: 2026-01-22 14:07:36.962 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:07:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:37.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:37.783+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:37 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.962 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:37 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.963 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:37 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.964 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:07:37 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.964 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:07:37 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.999 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:07:38 np0005592159 nova_compute[226433]: 2026-01-22 14:07:37.999 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:07:38 np0005592159 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:38 np0005592159 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:38 np0005592159 nova_compute[226433]: 2026-01-22 14:07:38.000 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:07:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:38.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:38.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:39.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:39 np0005592159 nova_compute[226433]: 2026-01-22 14:07:39.549 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:07:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:39.815+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:40.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:40 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:40.854+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:41.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:41 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:41.886+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:42.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:42 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:42.857+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:43.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:43.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:44.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:44 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:44 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:44.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:44 np0005592159 podman[233966]: 2026-01-22 14:07:44.994207057 +0000 UTC m=+0.054404862 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:07:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:45.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:45.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:45 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:46.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:46.871+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:46 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.176 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.177 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:07:47.177 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:07:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:47.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:47.874+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:48.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:49.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:49.931+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:50 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:50.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:50.921+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:51 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:51.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:51.924+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:52 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:52.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:52.914+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:53.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:53.901+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:54 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:54.887+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:55.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:07:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:55.848+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:07:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:56.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:56 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:56.856+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:57.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:57.808+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:07:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:07:58.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:07:58 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:58.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:07:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:07:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:07:59.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:07:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:07:59.825+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:07:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:07:59 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:00.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:00.845+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:00 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:01.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:01.798+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:02 np0005592159 podman[234043]: 2026-01-22 14:08:02.049605964 +0000 UTC m=+0.114937414 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:08:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:02.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:02.848+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:03 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:03.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:04.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:04.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:05.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:05 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:05.904+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:06.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:06 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:06.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:07.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:07.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:08.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:08.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:09.929+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:10.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:10.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:11 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:11 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:11.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:11.869+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:12.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:12.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:13.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:13.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:14.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:14.896+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:15.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:15.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:15 np0005592159 podman[234127]: 2026-01-22 14:08:15.989463214 +0000 UTC m=+0.050759045 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:08:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:16.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:16.940+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:17.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:17 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:17.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:18.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:19.000+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:19.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:19 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:20.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:20.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:21.069+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:21.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:22.052+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:22.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:23.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:23.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:24.006+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:24.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:25.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:25 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:25.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:26.034+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:26 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:26.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:27.036+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:27.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:28.017+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:28.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:29.025+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:29 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:30.060+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:30.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:30 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:30.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:31.096+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.653756944 +0000 UTC m=+0.039536741 container create fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 09:08:31 np0005592159 systemd[1]: Started libpod-conmon-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope.
Jan 22 09:08:31 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.635605307 +0000 UTC m=+0.021385124 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.731627132 +0000 UTC m=+0.117406949 container init fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.738236805 +0000 UTC m=+0.124016603 container start fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.741223164 +0000 UTC m=+0.127002961 container attach fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:31 np0005592159 systemd[1]: libpod-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope: Deactivated successfully.
Jan 22 09:08:31 np0005592159 pensive_mccarthy[234495]: 167 167
Jan 22 09:08:31 np0005592159 conmon[234495]: conmon fc517f37329d627da191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope/container/memory.events
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.745175618 +0000 UTC m=+0.130955415 container died fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:08:31 np0005592159 systemd[1]: var-lib-containers-storage-overlay-4eef4f06ca6b17b1ba01b4dd5148ff7cc37b70c682576d81bd21dc586105a325-merged.mount: Deactivated successfully.
Jan 22 09:08:31 np0005592159 podman[234478]: 2026-01-22 14:08:31.786398132 +0000 UTC m=+0.172177959 container remove fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mccarthy, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:08:31 np0005592159 systemd[1]: libpod-conmon-fc517f37329d627da19159669413d7e53c900f971369229fecadf1708c46f639.scope: Deactivated successfully.
Jan 22 09:08:31 np0005592159 podman[234518]: 2026-01-22 14:08:31.986083703 +0000 UTC m=+0.052642765 container create 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:08:32 np0005592159 systemd[1]: Started libpod-conmon-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope.
Jan 22 09:08:32 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:08:32 np0005592159 podman[234518]: 2026-01-22 14:08:31.96924953 +0000 UTC m=+0.035808622 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:08:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:32 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:08:32 np0005592159 podman[234518]: 2026-01-22 14:08:32.083244348 +0000 UTC m=+0.149803440 container init 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:08:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:32.087+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:32 np0005592159 podman[234518]: 2026-01-22 14:08:32.090466608 +0000 UTC m=+0.157025700 container start 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Jan 22 09:08:32 np0005592159 podman[234518]: 2026-01-22 14:08:32.094968756 +0000 UTC m=+0.161527908 container attach 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 22 09:08:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:32.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:32 np0005592159 podman[234537]: 2026-01-22 14:08:32.166519868 +0000 UTC m=+0.087968065 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:08:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:32 np0005592159 nova_compute[226433]: 2026-01-22 14:08:32.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:32 np0005592159 nova_compute[226433]: 2026-01-22 14:08:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:32.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:33.083+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592159 charming_cerf[234534]: [
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:    {
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "available": false,
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "ceph_device": false,
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "lsm_data": {},
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "lvs": [],
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "path": "/dev/sr0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "rejected_reasons": [
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "Insufficient space (<5GB)",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "Has a FileSystem"
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        ],
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        "sys_api": {
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "actuators": null,
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "device_nodes": "sr0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "devname": "sr0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "human_readable_size": "482.00 KB",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "id_bus": "ata",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "model": "QEMU DVD-ROM",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "nr_requests": "2",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "parent": "/dev/sr0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "partitions": {},
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "path": "/dev/sr0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "removable": "1",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "rev": "2.5+",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "ro": "0",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "rotational": "1",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "sas_address": "",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "sas_device_handle": "",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "scheduler_mode": "mq-deadline",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "sectors": 0,
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "sectorsize": "2048",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "size": 493568.0,
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "support_discard": "2048",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "type": "disk",
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:            "vendor": "QEMU"
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:        }
Jan 22 09:08:33 np0005592159 charming_cerf[234534]:    }
Jan 22 09:08:33 np0005592159 charming_cerf[234534]: ]
Jan 22 09:08:33 np0005592159 systemd[1]: libpod-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Deactivated successfully.
Jan 22 09:08:33 np0005592159 podman[234518]: 2026-01-22 14:08:33.221330814 +0000 UTC m=+1.287889886 container died 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:08:33 np0005592159 systemd[1]: libpod-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Consumed 1.139s CPU time.
Jan 22 09:08:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:33 np0005592159 systemd[1]: var-lib-containers-storage-overlay-da2b9dea5c2da9533a5641e7103fd3ae30764a14cd501f50608a8a55e523565a-merged.mount: Deactivated successfully.
Jan 22 09:08:33 np0005592159 podman[234518]: 2026-01-22 14:08:33.271631597 +0000 UTC m=+1.338190669 container remove 23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 22 09:08:33 np0005592159 systemd[1]: libpod-conmon-23c7f4c6a68c0014f65ede66420658a0658b30dfe4f74339ac9a5d0263e6e7cb.scope: Deactivated successfully.
Jan 22 09:08:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:34.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:34.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:08:34 np0005592159 nova_compute[226433]: 2026-01-22 14:08:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:34 np0005592159 nova_compute[226433]: 2026-01-22 14:08:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:34.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:35.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:35 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:36.059+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.532 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.532 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.533 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:36 np0005592159 nova_compute[226433]: 2026-01-22 14:08:36.533 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:08:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:36.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:37.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.538 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.540 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.541 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:08:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:08:37 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3884516686' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:08:37 np0005592159 nova_compute[226433]: 2026-01-22 14:08:37.956 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:08:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:38.010+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.094 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.095 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5192MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.095 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.096 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:08:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:38.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.171 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.172 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.172 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.205 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 22 09:08:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:38.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:08:38 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1175285928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.660 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.665 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.685 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.686 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 22 09:08:38 np0005592159 nova_compute[226433]: 2026-01-22 14:08:38.686 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:08:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:38.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592159 nova_compute[226433]: 2026-01-22 14:08:39.681 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:08:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:39.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.971729) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919971775, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2547, "num_deletes": 510, "total_data_size": 4639115, "memory_usage": 4704960, "flush_reason": "Manual Compaction"}
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090919996515, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2305866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31886, "largest_seqno": 34428, "table_properties": {"data_size": 2297650, "index_size": 4006, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3141, "raw_key_size": 26045, "raw_average_key_size": 20, "raw_value_size": 2276708, "raw_average_value_size": 1819, "num_data_blocks": 172, "num_entries": 1251, "num_filter_entries": 1251, "num_deletions": 510, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090766, "oldest_key_time": 1769090766, "file_creation_time": 1769090919, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 24926 microseconds, and 9983 cpu microseconds.
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.996646) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2305866 bytes OK
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.996681) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999072) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999102) EVENT_LOG_v1 {"time_micros": 1769090919999092, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:39.999130) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:39 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4626572, prev total WAL file size 4688192, number of live WAL files 2.
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.001438) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303033' seq:72057594037927935, type:22 .. '6C6F676D0031323538' seq:0, type:0; will stop at (end)
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2251KB)], [60(10117KB)]
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920001488, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 12666130, "oldest_snapshot_seqno": -1}
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 7229 keys, 9297660 bytes, temperature: kUnknown
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920079631, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 9297660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9254164, "index_size": 24312, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 191649, "raw_average_key_size": 26, "raw_value_size": 9126960, "raw_average_value_size": 1262, "num_data_blocks": 948, "num_entries": 7229, "num_filter_entries": 7229, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090920, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.080012) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 9297660 bytes
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.085461) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.8 rd, 118.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(9.5) write-amplify(4.0) OK, records in: 8222, records dropped: 993 output_compression: NoCompression
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.085493) EVENT_LOG_v1 {"time_micros": 1769090920085479, "job": 36, "event": "compaction_finished", "compaction_time_micros": 78278, "compaction_time_cpu_micros": 20842, "output_level": 6, "num_output_files": 1, "total_output_size": 9297660, "num_input_records": 8222, "num_output_records": 7229, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920086488, "job": 36, "event": "table_file_deletion", "file_number": 62}
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090920090392, "job": 36, "event": "table_file_deletion", "file_number": 60}
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.001277) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:40.090478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:40.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:08:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:40.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:40.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:41 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:41.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:42.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:42 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:42.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:43.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:44.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:44 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:44.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:45 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:45 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:45.858+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:46.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:46.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:46 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:46.859+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592159 podman[235957]: 2026-01-22 14:08:47.005281656 +0000 UTC m=+0.059516936 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.178 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.178 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:08:47.179 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:08:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:47.865+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:48.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:48 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:48.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:49.868+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:50 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:50.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:50.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:51 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:51.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:52 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:52.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:52.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:52.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #64. Immutable memtables: 0.
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.123328) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 64
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933123491, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 432, "num_deletes": 251, "total_data_size": 408537, "memory_usage": 417752, "flush_reason": "Manual Compaction"}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #65: started
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933127349, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 65, "file_size": 268216, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34433, "largest_seqno": 34860, "table_properties": {"data_size": 265866, "index_size": 450, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6293, "raw_average_key_size": 19, "raw_value_size": 260953, "raw_average_value_size": 795, "num_data_blocks": 20, "num_entries": 328, "num_filter_entries": 328, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090919, "oldest_key_time": 1769090919, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 4046 microseconds, and 1219 cpu microseconds.
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127375) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #65: 268216 bytes OK
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.127388) [db/memtable_list.cc:519] [default] Level-0 commit table #65 started
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128906) [db/memtable_list.cc:722] [default] Level-0 commit table #65: memtable #1 done
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128920) EVENT_LOG_v1 {"time_micros": 1769090933128916, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.128934) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 405790, prev total WAL file size 405790, number of live WAL files 2.
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000061.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129281) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [65(261KB)], [63(9079KB)]
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933129335, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [65], "files_L6": [63], "score": -1, "input_data_size": 9565876, "oldest_snapshot_seqno": -1}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #66: 7045 keys, 7847251 bytes, temperature: kUnknown
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933186372, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 66, "file_size": 7847251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7806160, "index_size": 22355, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17669, "raw_key_size": 188614, "raw_average_key_size": 26, "raw_value_size": 7683168, "raw_average_value_size": 1090, "num_data_blocks": 861, "num_entries": 7045, "num_filter_entries": 7045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769090933, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 66, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.186617) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 7847251 bytes
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.188143) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.5 rd, 137.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 8.9 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(64.9) write-amplify(29.3) OK, records in: 7557, records dropped: 512 output_compression: NoCompression
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.188164) EVENT_LOG_v1 {"time_micros": 1769090933188154, "job": 38, "event": "compaction_finished", "compaction_time_micros": 57114, "compaction_time_cpu_micros": 20386, "output_level": 6, "num_output_files": 1, "total_output_size": 7847251, "num_input_records": 7557, "num_output_records": 7045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933188384, "job": 38, "event": "table_file_deletion", "file_number": 65}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000063.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769090933190282, "job": 38, "event": "table_file_deletion", "file_number": 63}
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.129240) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190399) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:08:53.190402) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:08:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:53.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:54 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:54.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:54.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:54.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:55 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1923 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:08:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:55.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:08:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:56.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:56 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:08:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:56.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:08:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:57.016+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:58.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:08:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:08:58.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:08:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:08:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:08:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:08:58.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:08:58 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:08:59.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:08:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:08:59 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:00.085+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:00.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:00.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:00 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:01.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:01 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:02.001+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:02.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:02.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:02.977+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:03 np0005592159 podman[236037]: 2026-01-22 14:09:03.068684638 +0000 UTC m=+0.129106276 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:09:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:03.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:04.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:04.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:05.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:05 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:06.037+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:06.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:06 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:06.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:07.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:07 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:08.057+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:08.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:08.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:09.016+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:09 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:10.059+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:10 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:10 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:10.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:11.104+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:11 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:12.119+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:12.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:12.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:13.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:13 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:14.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:14.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:14.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:15.106+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:15 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:16.074+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:16.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:16.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:17.112+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:17 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:18 np0005592159 podman[236121]: 2026-01-22 14:09:18.025586985 +0000 UTC m=+0.087949893 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 09:09:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:09:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:09:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:09:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1899736290' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:09:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:18.142+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:18.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:18.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:19.172+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:20.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:20.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:09:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:20.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:09:20 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:21.234+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:22.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:22.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:22.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:22 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:23.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:24.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:24.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:24 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:25.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:26.139+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:26.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:26.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:27.140+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:28.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:28.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:28.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:29 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:29.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:30 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:30.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:30.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:30.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:31.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:32.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:32.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:32 np0005592159 nova_compute[226433]: 2026-01-22 14:09:32.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:09:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:32.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:33.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:34 np0005592159 podman[236200]: 2026-01-22 14:09:34.013918544 +0000 UTC m=+0.074509852 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:09:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:34.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:34.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:34 np0005592159 nova_compute[226433]: 2026-01-22 14:09:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:34 np0005592159 nova_compute[226433]: 2026-01-22 14:09:34.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:34.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:35.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:35 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1962 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:36.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:36.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:09:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:36.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.870 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:36 np0005592159 nova_compute[226433]: 2026-01-22 14:09:36.871 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:09:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:37.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.541 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:09:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:09:37 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1465988269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:09:37 np0005592159 nova_compute[226433]: 2026-01-22 14:09:37.954 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.145 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5192MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.146 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.221 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.222 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.222 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=20GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:09:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:38.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.268 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:09:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:38.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:09:38 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1883155954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.695 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.700 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.715 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.717 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:09:38 np0005592159 nova_compute[226433]: 2026-01-22 14:09:38.717 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:39.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:39 np0005592159 nova_compute[226433]: 2026-01-22 14:09:39.713 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:39 np0005592159 nova_compute[226433]: 2026-01-22 14:09:39.714 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:40.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:40.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:40 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:41.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:09:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:09:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:42.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:42.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:42.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:42 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:43.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:44.173+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:44 np0005592159 nova_compute[226433]: 2026-01-22 14:09:44.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:09:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:44.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:44 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:45.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:45 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1972 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:45 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:46.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:46.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:46.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:46 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.179 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:09:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:09:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:47.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:48.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:48.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:48.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:48 np0005592159 podman[236486]: 2026-01-22 14:09:48.737251241 +0000 UTC m=+0.062092484 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 09:09:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:48 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:09:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:49.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:50 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:50.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:50.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:50.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:51.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:52 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:52.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:52.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:52.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:53.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:53 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:54.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:54.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:54 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:54.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:55.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:55 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:09:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:09:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:56.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:09:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:56.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:09:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:56.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:56 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:57.259+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:09:58.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:58.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:09:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:09:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:09:58.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:09:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:09:59.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:09:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:09:59 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:00.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:00.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:00.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:01.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:01 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:01 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:01 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 12 slow ops, oldest one blocked for 1987 sec, osd.2 has slow ops
Jan 22 09:10:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:02.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:02.358+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:02.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:03.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:04.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:04.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:04.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:05 np0005592159 podman[236542]: 2026-01-22 14:10:05.041553316 +0000 UTC m=+0.099653128 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:10:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:05.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:05 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:06.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:06 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:06.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:06.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:07 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:07.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:08.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:08.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:08.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:09 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:09.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:09 np0005592159 nova_compute[226433]: 2026-01-22 14:10:09.923 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:09 np0005592159 nova_compute[226433]: 2026-01-22 14:10:09.924 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:09 np0005592159 nova_compute[226433]: 2026-01-22 14:10:09.951 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:10:09 np0005592159 nova_compute[226433]: 2026-01-22 14:10:09.994 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:09 np0005592159 nova_compute[226433]: 2026-01-22 14:10:09.994 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.031 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.061 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.062 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.069 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.069 226437 INFO nova.compute.claims [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.143 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.265 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:10.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:10.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:10 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:10 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 1997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:10.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:10 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1847141595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.732 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.740 226437 DEBUG nova.compute.provider_tree [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.762 226437 DEBUG nova.scheduler.client.report [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.788 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.790 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.797 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.805 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.806 226437 INFO nova.compute.claims [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.847 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.848 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.908 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:10:10 np0005592159 nova_compute[226433]: 2026-01-22 14:10:10.945 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.059 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.091 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.093 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.094 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating image(s)#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.126 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.153 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.185 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.192 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.241 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Automatically allocating a network for project e6c399bf43074b81b45ca1d976cb2b18. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 22 09:10:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:11.441+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:11 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2985836776' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.494 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.501 226437 DEBUG nova.compute.provider_tree [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.519 226437 DEBUG nova.scheduler.client.report [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.545 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.546 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:10:11 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.567 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.568 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.569 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.569 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.599 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.602 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.628 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.629 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.659 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.677 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.825 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.827 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.828 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating image(s)#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.867 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.900 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.923 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.927 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.959 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.994 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.995 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.995 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:11 np0005592159 nova_compute[226433]: 2026-01-22 14:10:11.996 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.018 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.021 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 2314cf64-76a5-4383-8f2e-58228261f71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.085 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] resizing rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.243 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'migration_context' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.268 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.269 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Ensure instance console log exists: /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.269 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.270 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.270 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.299 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 2314cf64-76a5-4383-8f2e-58228261f71b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:12.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.364 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] resizing rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:10:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:12.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.458 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Automatically allocating a network for project e6c399bf43074b81b45ca1d976cb2b18. _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2460#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.467 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'migration_context' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.572 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.572 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Ensure instance console log exists: /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:12 np0005592159 nova_compute[226433]: 2026-01-22 14:10:12.573 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:12.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:12 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:12.851 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:12 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:12.853 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:10:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:13.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:13 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:14.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:14.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:14.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:15.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:15 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2002 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:16.490+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:17.534+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:18.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:18.553+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:18.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:18 np0005592159 podman[237002]: 2026-01-22 14:10:18.994332325 +0000 UTC m=+0.052203112 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 09:10:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:19.593+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:20.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:20.631+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:20.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:20 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2007 sec, osd.2 has slow ops (SLOW_OPS)
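The SLOW_OPS health updates above point at osd.2 with the oldest op blocked for roughly 2000 seconds. One way to inspect the blocked requests is via the cluster health detail and the OSD admin socket; a sketch is below, assuming the ceph CLI and the osd.2 admin socket are reachable from this node (the osd id is taken from the log).

    # Hedged sketch: list the in-flight ops behind the SLOW_OPS warning.
    import json
    import subprocess

    print(subprocess.check_output(["ceph", "health", "detail"]).decode())

    ops = json.loads(subprocess.check_output(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"]))
    for op in ops.get("ops", []):
        # Each entry carries an age (seconds) and a description of the op.
        print(op.get("age"), op.get("description"))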
Jan 22 09:10:20 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:20.855 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:21.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:22.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:22.658+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:23.621+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:24.573+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:24.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:25 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:25.615+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:26.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:26.657+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:27.680+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:28.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:28.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:28.677+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:29.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:30.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:30.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:30.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:31 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:31.733+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:32.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:32 np0005592159 nova_compute[226433]: 2026-01-22 14:10:32.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:32.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:32.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:33.757+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:34.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:34.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:34.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:35 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:35 np0005592159 nova_compute[226433]: 2026-01-22 14:10:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:35 np0005592159 nova_compute[226433]: 2026-01-22 14:10:35.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:10:35 np0005592159 nova_compute[226433]: 2026-01-22 14:10:35.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:10:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:35.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:36 np0005592159 podman[237080]: 2026-01-22 14:10:36.014394571 +0000 UTC m=+0.081819756 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:10:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.225 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Automatically allocated network: {'id': '18c81f01-33be-49a1-a179-aecc87794f99', 'name': 'auto_allocated_network', 'tenant_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['41485253-d693-4726-824d-ace746b534e1', '9c3d77fd-5c90-4745-9c8a-c335ad8bf441'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-22T14:10:12Z', 'updated_at': '2026-01-22T14:10:26Z', 'revision_number': 4, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.226 226437 DEBUG nova.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd58a5335a8745f1b3ce1bd9a0439003', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:10:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:36.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.543 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.565 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.566 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.567 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:36 np0005592159 nova_compute[226433]: 2026-01-22 14:10:36.567 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:36.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:36.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.249 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Automatically allocated network: {'id': '18c81f01-33be-49a1-a179-aecc87794f99', 'name': 'auto_allocated_network', 'tenant_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'admin_state_up': True, 'mtu': 1442, 'status': 'ACTIVE', 'subnets': ['41485253-d693-4726-824d-ace746b534e1', '9c3d77fd-5c90-4745-9c8a-c335ad8bf441'], 'shared': False, 'availability_zone_hints': [], 'availability_zones': [], 'ipv4_address_scope': None, 'ipv6_address_scope': None, 'router:external': False, 'description': '', 'qos_policy_id': None, 'port_security_enabled': True, 'dns_domain': '', 'l2_adjacency': True, 'tags': [], 'created_at': '2026-01-22T14:10:12Z', 'updated_at': '2026-01-22T14:10:26Z', 'revision_number': 4, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18'} _auto_allocate_network /usr/lib/python3.9/site-packages/nova/network/neutron.py:2478#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.250 226437 DEBUG nova.policy [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fd58a5335a8745f1b3ce1bd9a0439003', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.424 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Successfully created port: 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.548 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:10:37 np0005592159 nova_compute[226433]: 2026-01-22 14:10:37.548 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:37.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:38 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2167703012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.581 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
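The resource tracker audit above shells out to "ceph df --format=json" with the openstack keyring to size the RBD-backed disk pool. A hedged sketch of running the same probe and reading cluster and per-pool usage follows; exact JSON field names can vary between Ceph releases, so treat the key access as an assumption.

    # Hedged sketch: reproduce the "ceph df" probe and print capacity figures.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)

    stats = df.get("stats", {})
    print("total bytes:", stats.get("total_bytes"))
    print("avail bytes:", stats.get("total_avail_bytes"))
    for pool in df.get("pools", []):
        # Per-pool usage; 'bytes_used' may be named differently on some releases.
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))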
Jan 22 09:10:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:38.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:38.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.731 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5146MB free_disk=20.888916015625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.732 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:38 np0005592159 nova_compute[226433]: 2026-01-22 14:10:38.985 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Successfully created port: 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 0c72e43b-d26a-47b8-ab7d-739190e552a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.177 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 2314cf64-76a5-4383-8f2e-58228261f71b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.178 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.178 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:10:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.355 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Successfully updated port: 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.420 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:39.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
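osd.2 keeps reporting the same 12 delayed ops, and the monitor's SLOW_OPS health updates below show the oldest one blocked for over 2000 seconds. The usual next step is to inspect the blocked ops directly; a hedged sketch using the standard ceph CLI via subprocess (the health-detail JSON layout and the dump_ops_in_flight admin-socket command are standard Ceph, but field names may vary by release, and the daemon command must run where osd.2's admin socket is reachable):

    import json
    import subprocess

    def run(*args):
        """Run a command and return stdout, raising on non-zero exit."""
        return subprocess.run(args, check=True, capture_output=True, text=True).stdout

    # Cluster-wide view of the SLOW_OPS health check.
    health = json.loads(run("ceph", "health", "detail", "--format=json"))
    print(health.get("checks", {}).get("SLOW_OPS"))

    # Ops currently blocked on the affected daemon; the admin-socket command
    # already emits JSON.
    ops = json.loads(run("ceph", "daemon", "osd.2", "dump_ops_in_flight"))
    print(len(ops.get("ops", [])), "ops in flight")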
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.686 226437 DEBUG nova.compute.manager [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-changed-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.687 226437 DEBUG nova.compute.manager [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Refreshing instance network info cache due to event network-changed-3fe867d7-5ecf-4683-85f1-5f2bdce33a78. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.687 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.688 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.688 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Refreshing network info cache for port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.702 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:10:39 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2822757333' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.924 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
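The resource tracker gets its Ceph-backed disk figures by shelling out to `ceph df --format=json` with the openstack client id; the 0.504 s runtime logged here is itself a symptom of the struggling cluster. Reproducing the call outside Nova, assuming the same id and conf path (field names follow current ceph df JSON output and may differ slightly between releases):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    df = json.loads(out)

    # Cluster-wide totals that the RBD image backend turns into free/total disk.
    print(df["stats"]["total_bytes"], df["stats"]["total_avail_bytes"])

    # Per-pool usage, e.g. the 'vms' pool that the slow requests are hitting.
    for pool in df["pools"]:
        print(pool["name"], pool["stats"]["bytes_used"])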
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.929 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.961 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
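The inventory line fixes what Placement will actually schedule onto this node: for each resource class the effective capacity is (total - reserved) * allocation_ratio. Plugging in the logged values:

    # Effective Placement capacity implied by the logged inventory,
    # using capacity = (total - reserved) * allocation_ratio.
    vcpu   = (8    - 0)   * 4.0   # -> 32.0 schedulable VCPUs
    memory = (7679 - 512) * 1.0   # -> 7167.0 MB schedulable RAM
    disk   = (20   - 1)   * 0.9   # -> 17.1 GB schedulable disk
    print(vcpu, memory, disk)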
Jan 22 09:10:39 np0005592159 nova_compute[226433]: 2026-01-22 14:10:39.998 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:10:40 np0005592159 nova_compute[226433]: 2026-01-22 14:10:40.004 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:10:40 np0005592159 nova_compute[226433]: 2026-01-22 14:10:40.004 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
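The radosgw "beast" lines are HAProxy health probes: an anonymous HEAD / from 192.168.122.100 and .102 roughly every two seconds, all returning 200. A hypothetical regex for pulling client, request, status and latency out of lines in exactly this format (the pattern is an assumption derived from the line above, not an official log schema):

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:14:10:40.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("client"), m.group("request"), m.group("status"), m.group("latency"))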
Jan 22 09:10:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:40.665+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:40.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:41 np0005592159 nova_compute[226433]: 2026-01-22 14:10:41.018 226437 DEBUG nova.network.neutron [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:41 np0005592159 nova_compute[226433]: 2026-01-22 14:10:41.072 226437 DEBUG oslo_concurrency.lockutils [req-60fecde4-1422-426e-bf2a-2fea47efcd6a req-dd3fa51a-081d-464c-a50f-089e77dd3191 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:10:41 np0005592159 nova_compute[226433]: 2026-01-22 14:10:41.073 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquired lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:41 np0005592159 nova_compute[226433]: 2026-01-22 14:10:41.074 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:10:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:41 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:41 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:41.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:42.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:42 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:42 np0005592159 nova_compute[226433]: 2026-01-22 14:10:42.490 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:10:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:42.648+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:42.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:42.999 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:43.000 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:43.252 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Successfully updated port: 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:43.339 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:43.339 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquired lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:43 np0005592159 nova_compute[226433]: 2026-01-22 14:10:43.340 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:10:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:43.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:44 np0005592159 nova_compute[226433]: 2026-01-22 14:10:44.254 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:10:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:44.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:44 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:44 np0005592159 nova_compute[226433]: 2026-01-22 14:10:44.574 226437 DEBUG nova.compute.manager [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-changed-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:44 np0005592159 nova_compute[226433]: 2026-01-22 14:10:44.575 226437 DEBUG nova.compute.manager [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Refreshing instance network info cache due to event network-changed-1bf106b6-ded0-49a9-a53d-2c3faebdf840. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:10:44 np0005592159 nova_compute[226433]: 2026-01-22 14:10:44.575 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:10:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:44.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:44.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:45 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:45 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:45.684+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:46.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:46 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:46.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:46.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.180 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.181 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:47.181 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:47.712+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.853 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.925 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Releasing lock "refresh_cache-2314cf64-76a5-4383-8f2e-58228261f71b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.925 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance network_info: |[{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
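The cached network_info for instance 2314cf64 carries the OVN-bound port plus both fixed addresses (10.1.0.8 and fdfe:381f:8400::3c7). A small hypothetical walker over that structure, assuming the layout exactly as logged:

    def fixed_ips(network_info):
        """Yield (port_id, address) for every fixed IP in a Nova network_info list."""
        for vif in network_info:
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip.get("type") == "fixed":
                        yield vif["id"], ip["address"]

    # Applied to the cache entry logged above, this yields the port id
    # 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 paired with 10.1.0.8 and
    # fdfe:381f:8400::3c7.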
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.927 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start _get_guest_xml network_info=[{"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.930 226437 WARNING nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.941 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.942 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.953 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.954 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.955 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.956 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.957 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
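With no flavor or image topology limits (everything 0:0:0 against a 65536 maximum), the driver enumerates factorizations of the vCPU count and is left with the single 1 socket * 1 core * 1 thread candidate for this 1-vCPU flavor. A simplified sketch of that enumeration, not Nova's actual code, just the idea:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        """Enumerate (sockets, cores, threads) triples whose product equals vcpus."""
        found = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // (sockets * cores)
                if threads <= max_threads:
                    found.append((sockets, cores, threads))
        return found

    print(possible_topologies(1))   # -> [(1, 1, 1)], matching the log
    print(possible_topologies(4))   # includes (1, 4, 1), (2, 2, 1) and (4, 1, 1)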
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.960 226437 DEBUG nova.privsep.utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
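"Path '/var/lib/nova/instances' supports direct I/O" means the driver probed the instances directory with an O_DIRECT write before deciding how to open disk images. A rough, Linux-only equivalent of such a probe (an illustrative sketch, not Nova's implementation; O_DIRECT requires an aligned buffer, which mmap provides):

    import mmap
    import os

    def supports_direct_io(dirpath, block_size=4096):
        """Return True if an O_DIRECT write succeeds in dirpath (Linux only)."""
        testfile = os.path.join(dirpath, ".directio.test")
        buf = mmap.mmap(-1, block_size)          # page-aligned scratch buffer
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            try:
                os.write(fd, buf)
                return True
            finally:
                os.close(fd)
        except OSError:
            return False
        finally:
            buf.close()
            try:
                os.unlink(testfile)
            except OSError:
                pass

    print(supports_direct_io("/var/lib/nova/instances"))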
Jan 22 09:10:47 np0005592159 nova_compute[226433]: 2026-01-22 14:10:47.961 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:48.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/61830410' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.378 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.406 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
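The "does not exist" debug line is just Nova probing for the config-drive image before it creates it in RBD. Checking the same thing by hand with the python-rbd bindings would look roughly like this (the pool name 'vms' and client id 'openstack' are taken from this deployment; the rados/rbd calls are the standard Python bindings, sketched from memory):

    import rados
    import rbd

    def rbd_image_exists(pool, image,
                         conffile="/etc/ceph/ceph.conf", client="openstack"):
        """Return True if `image` exists in `pool` (python-rbd sketch)."""
        cluster = rados.Rados(conffile=conffile, rados_id=client)
        cluster.connect()
        try:
            ioctx = cluster.open_ioctx(pool)
            try:
                with rbd.Image(ioctx, image, read_only=True):
                    return True
            except rbd.ImageNotFound:
                return False
            finally:
                ioctx.close()
        finally:
            cluster.shutdown()

    print(rbd_image_exists("vms", "2314cf64-76a5-4383-8f2e-58228261f71b_disk.config"))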
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.410 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.518 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:10:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:48.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:48.677+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.684 226437 DEBUG nova.network.neutron [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.702 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Releasing lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.703 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance network_info: |[{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.703 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.704 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Refreshing network info cache for port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.706 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start _get_guest_xml network_info=[{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.710 226437 WARNING nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.735 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.736 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.753 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.754 226437 DEBUG nova.virt.libvirt.host [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.755 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.756 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.757 226437 DEBUG nova.virt.hardware [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
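The topology walk above starts from flavor and image limits of 0 (unset), falls back to the 65536 defaults, and ends with the single candidate 1:1:1 for a 1-vCPU guest. A small sketch of the underlying idea, enumerating the (sockets, cores, threads) factorizations of the vCPU count that fit under the maxima; the real selection in nova/virt/hardware.py additionally honours preferred values and NUMA constraints, so treat this as an illustration only.

    # Sketch: enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count and which stay within the given maxima.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        topologies = []
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            if vcpus % sockets:
                continue
            for cores in range(1, min(vcpus // sockets, max_cores) + 1):
                if (vcpus // sockets) % cores:
                    continue
                threads = vcpus // sockets // cores
                if threads <= max_threads:
                    topologies.append(VirtCPUTopology(sockets, cores, threads))
        return topologies

    print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]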
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.760 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1226628873' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.842 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
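The "ceph mon dump --format=json" call above (taking about 0.4 s against the slow cluster) is how the RBD image backend learns the monitor addresses that later appear as <host> elements in the guest disk XML. A standalone sketch of the same call and of extracting the addresses, assuming the client.openstack keyring and /etc/ceph/ceph.conf used in the log are readable:

    # Sketch: run "ceph mon dump --format=json" and pull out the monitor
    # addresses, roughly what nova.storage.rbd_utils does before building the
    # <source protocol="rbd"> element.
    import json
    import subprocess

    def monitor_addresses(client="openstack", conf="/etc/ceph/ceph.conf"):
        out = subprocess.run(
            ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        mon_map = json.loads(out)
        # Each mon entry carries an "addr" like "192.168.122.100:6789/0".
        return [mon["addr"].split("/")[0] for mon in mon_map["mons"]]

    print(monitor_addresses())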
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.844 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.844 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.846 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
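Above, the nova network_info dict is converted into an os-vif VIFOpenVSwitch versioned object. A rough sketch of building the equivalent object directly; the field names follow the converted repr in the log, but the exact constructor arguments should be treated as an assumption rather than a guaranteed os-vif contract.

    # Sketch: build an os-vif VIFOpenVSwitch roughly matching the converted
    # object logged above. os_vif objects are oslo.versionedobjects and accept
    # keyword fields; the field set used here is an assumption based on the repr.
    from os_vif.objects import network, vif

    port = vif.VIFOpenVSwitch(
        id="3fe867d7-5ecf-4683-85f1-5f2bdce33a78",
        address="fa:16:3e:c1:38:78",
        bridge_name="br-int",
        vif_name="tap3fe867d7-5e",
        has_traffic_filtering=True,
        preserve_on_delete=False,
        network=network.Network(id="18c81f01-33be-49a1-a179-aecc87794f99"),
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="3fe867d7-5ecf-4683-85f1-5f2bdce33a78"),
    )
    print(port)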
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.849 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.962 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <uuid>2314cf64-76a5-4383-8f2e-58228261f71b</uuid>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <name>instance-00000006</name>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:name>tempest-tempest.common.compute-instance-811251323-2</nova:name>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:10:47</nova:creationTime>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.nano">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:user uuid="fd58a5335a8745f1b3ce1bd9a0439003">tempest-AutoAllocateNetworkTest-687426125-project-member</nova:user>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:project uuid="e6c399bf43074b81b45ca1d976cb2b18">tempest-AutoAllocateNetworkTest-687426125</nova:project>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <nova:ports>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <nova:port uuid="3fe867d7-5ecf-4683-85f1-5f2bdce33a78">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="10.1.0.8" ipVersion="4"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="fdfe:381f:8400::3c7" ipVersion="6"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        </nova:port>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </nova:ports>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="serial">2314cf64-76a5-4383-8f2e-58228261f71b</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="uuid">2314cf64-76a5-4383-8f2e-58228261f71b</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/2314cf64-76a5-4383-8f2e-58228261f71b_disk">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/2314cf64-76a5-4383-8f2e-58228261f71b_disk.config">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <interface type="ethernet">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <mac address="fa:16:3e:c1:38:78"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <mtu size="1442"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <target dev="tap3fe867d7-5e"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </interface>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/console.log" append="off"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:10:48 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:10:48 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:10:48 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:10:48 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
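_get_guest_xml ends here; the driver next hands this XML to libvirt to create the guest. A bare-bones sketch of the same step with the libvirt Python bindings, assuming access to qemu:///system; nova itself goes through its Guest/Host wrappers rather than calling libvirt this directly.

    # Sketch: start a transient guest from an XML string with the libvirt
    # Python bindings. Nova wraps this in nova.virt.libvirt.guest.Guest; this
    # is only the bare libvirt equivalent.
    import libvirt

    def launch(xml, uri="qemu:///system"):
        conn = libvirt.open(uri)
        try:
            # createXML() boots a transient domain; defineXML() followed by
            # create() would make it persistent instead.
            domain = conn.createXML(xml, 0)
            return domain.UUIDString()
        finally:
            conn.close()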
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Preparing to wait for external event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.964 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.965 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
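The three lockutils lines above bracket event registration with a per-instance "<uuid>-events" lock so that this waiter and the later network-vif-plugged callback cannot race. The same oslo.concurrency pattern in isolation, with the event registry reduced to a plain dict as a stand-in for _create_or_get_event():

    # Sketch: oslo.concurrency locking as used around the instance-events
    # registry above. lockutils.lock() is a context manager.
    from oslo_concurrency import lockutils

    events = {}

    def prepare_event(instance_uuid, name, tag):
        with lockutils.lock(instance_uuid + "-events"):
            # register a waiter (an eventlet Event in nova; a placeholder here)
            events.setdefault(instance_uuid, {})[(name, tag)] = object()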
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.965 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:11Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.966 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.967 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:10:48 np0005592159 nova_compute[226433]: 2026-01-22 14:10:48.968 226437 DEBUG os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
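Here os-vif is asked to plug the OVS port before the guest is started, so that the network-vif-plugged event can arrive. A rough sketch of driving the same public entry points; "port" stands for the VIFOpenVSwitch object from the conversion sketch above (an assumption of this example), and the InstanceInfo fields are taken from the instance in the log.

    # Sketch: plug a VIF through os-vif's public API. "port" is assumed to be
    # the VIFOpenVSwitch object built earlier; the ovs plugin only needs the
    # instance uuid and name from InstanceInfo.
    import os_vif
    from os_vif.objects import instance_info

    os_vif.initialize()
    info = instance_info.InstanceInfo(
        uuid="2314cf64-76a5-4383-8f2e-58228261f71b",
        name="instance-00000006",
    )
    os_vif.plug(port, info)   # later, os_vif.unplug(port, info) on teardown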
Jan 22 09:10:48 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.010 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.012 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLOUT] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.017 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 podman[237339]: 2026-01-22 14:10:49.089522999 +0000 UTC m=+0.042017103 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.140 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.141 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.141 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
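The transaction above issues AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system) over the local ovsdb connection and, as logged, causes no change because br-int already exists. The same idempotent operation expressed through the ovs-vsctl CLI from Python, only as an equivalent sketch; the os-vif ovs plugin really goes through ovsdbapp against tcp:127.0.0.1:6640, not the CLI.

    # Sketch: the idempotent bridge creation that AddBridgeCommand performs,
    # expressed via ovs-vsctl. "--may-exist" makes it a no-op if br-int exists.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-br", "br-int",
         "--", "set", "Bridge", "br-int", "datapath_type=system"],
        check=True,
    )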
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.143 226437 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpwbd1s1u6/privsep.sock']#033[00m
Jan 22 09:10:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1368713803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.211 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.236 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
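The rbd_utils message above is nova noticing that the config-drive image for the second instance does not exist yet in the vms pool. A sketch of the same existence check with the rados/rbd Python bindings, assuming the ceph.conf and client.openstack keyring visible to the nova_compute container; nova's own version lives in nova.storage.rbd_utils.

    # Sketch: check whether an RBD image exists in a pool using the Ceph
    # Python bindings; opening a missing image raises rbd.ImageNotFound.
    import rados
    import rbd

    def rbd_image_exists(pool, name, conf="/etc/ceph/ceph.conf", client="openstack"):
        with rados.Rados(conffile=conf, rados_id=client) as cluster:
            with cluster.open_ioctx(pool) as ioctx:
                try:
                    with rbd.Image(ioctx, name, read_only=True):
                        return True
                except rbd.ImageNotFound:
                    return False

    print(rbd_image_exists("vms", "0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config"))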
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.241 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:10:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:10:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3062574627' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:10:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:49.654+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
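The mon and OSD lines above report 12 slow requests, mostly against the vms pool, while the two guests are being spawned. A couple of read-only queries that can help inspect such slow ops, shown as a sketch; ceph health detail needs a keyring with monitor read access, and the daemon-socket query has to run where osd.2's admin socket is reachable (here, inside its container).

    # Sketch: read-only diagnostics for the slow requests reported above.
    import subprocess

    for cmd in (
        ["ceph", "health", "detail"],
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
    ):
        print("$", " ".join(cmd))
        subprocess.run(cmd, check=False)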
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.659 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.661 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:10Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.661 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.662 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.663 226437 DEBUG nova.objects.instance [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.770 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <uuid>0c72e43b-d26a-47b8-ab7d-739190e552a5</uuid>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <name>instance-00000005</name>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:name>tempest-tempest.common.compute-instance-811251323-1</nova:name>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:10:48</nova:creationTime>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.nano">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:user uuid="fd58a5335a8745f1b3ce1bd9a0439003">tempest-AutoAllocateNetworkTest-687426125-project-member</nova:user>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:project uuid="e6c399bf43074b81b45ca1d976cb2b18">tempest-AutoAllocateNetworkTest-687426125</nova:project>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <nova:ports>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <nova:port uuid="1bf106b6-ded0-49a9-a53d-2c3faebdf840">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="10.1.0.29" ipVersion="4"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="fdfe:381f:8400::7d" ipVersion="6"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        </nova:port>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </nova:ports>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="serial">0c72e43b-d26a-47b8-ab7d-739190e552a5</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="uuid">0c72e43b-d26a-47b8-ab7d-739190e552a5</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/0c72e43b-d26a-47b8-ab7d-739190e552a5_disk">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <interface type="ethernet">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <mac address="fa:16:3e:91:f4:90"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <mtu size="1442"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <target dev="tap1bf106b6-de"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </interface>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/console.log" append="off"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:10:49 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:10:49 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:10:49 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:10:49 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.784 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Preparing to wait for external event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.784 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.785 226437 DEBUG nova.virt.libvirt.vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:10:10Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.786 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.786 226437 DEBUG nova.network.os_vif_util [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.787 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.788 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.821 226437 INFO oslo.privsep.daemon [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.700 237484 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.703 237484 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.705 237484 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.705 237484 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237484#033[00m
Jan 22 09:10:49 np0005592159 nova_compute[226433]: 2026-01-22 14:10:49.825 226437 WARNING oslo_privsep.priv_context [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] privsep daemon already running#033[00m
Jan 22 09:10:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:10:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:10:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3fe867d7-5e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.202 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3fe867d7-5e, col_values=(('external_ids', {'iface-id': '3fe867d7-5ecf-4683-85f1-5f2bdce33a78', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:38:78', 'vm-uuid': '2314cf64-76a5-4383-8f2e-58228261f71b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.203 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 NetworkManager[49000]: <info>  [1769091050.2050] manager: (tap3fe867d7-5e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.212 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.213 226437 INFO os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e')#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.215 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.215 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1bf106b6-de, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.216 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1bf106b6-de, col_values=(('external_ids', {'iface-id': '1bf106b6-ded0-49a9-a53d-2c3faebdf840', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:f4:90', 'vm-uuid': '0c72e43b-d26a-47b8-ab7d-739190e552a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:50 np0005592159 NetworkManager[49000]: <info>  [1769091050.2185] manager: (tap1bf106b6-de): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.218 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.222 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.228 226437 INFO os_vif [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de')#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No VIF found with MAC fa:16:3e:91:f4:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.349 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Using config drive#033[00m
Jan 22 09:10:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:50.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.374 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.385 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] No VIF found with MAC fa:16:3e:c1:38:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.386 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Using config drive#033[00m
Jan 22 09:10:50 np0005592159 nova_compute[226433]: 2026-01-22 14:10:50.411 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:50.635+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:50.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:51 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:51 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:10:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:51.617+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:52 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:52.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:52.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:53 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.181 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Creating config drive at /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.186 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05_2ig4e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.312 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp05_2ig4e" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.338 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.341 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.415 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.620 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config 2314cf64-76a5-4383-8f2e-58228261f71b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.621 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deleting local config drive /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b/disk.config because it was imported into RBD.#033[00m
Jan 22 09:10:53 np0005592159 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:10:53 np0005592159 systemd[1]: Started libvirt secret daemon.
Jan 22 09:10:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:53.678+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:53 np0005592159 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 22 09:10:53 np0005592159 NetworkManager[49000]: <info>  [1769091053.7084] manager: (tap3fe867d7-5e): new Tun device (/org/freedesktop/NetworkManager/Devices/25)
Jan 22 09:10:53 np0005592159 kernel: tap3fe867d7-5e: entered promiscuous mode
Jan 22 09:10:53 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:53Z|00027|binding|INFO|Claiming lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for this chassis.
Jan 22 09:10:53 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:53Z|00028|binding|INFO|3fe867d7-5ecf-4683-85f1-5f2bdce33a78: Claiming fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.712 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.718 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.739 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], port_security=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::3c7/64', 'neutron:device_id': '2314cf64-76a5-4383-8f2e-58228261f71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=3fe867d7-5ecf-4683-85f1-5f2bdce33a78) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.740 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 bound to our chassis#033[00m
Jan 22 09:10:53 np0005592159 systemd-udevd[237607]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:10:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.742 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99#033[00m
Jan 22 09:10:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:53.743 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_pg3kwj0/privsep.sock']#033[00m
Jan 22 09:10:53 np0005592159 NetworkManager[49000]: <info>  [1769091053.7559] device (tap3fe867d7-5e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:10:53 np0005592159 NetworkManager[49000]: <info>  [1769091053.7566] device (tap3fe867d7-5e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:10:53 np0005592159 systemd-machined[194970]: New machine qemu-1-instance-00000006.
Jan 22 09:10:53 np0005592159 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.793 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:53 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:53Z|00029|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 ovn-installed in OVS
Jan 22 09:10:53 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:53Z|00030|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 up in Southbound
Jan 22 09:10:53 np0005592159 nova_compute[226433]: 2026-01-22 14:10:53.804 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.137 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Creating config drive at /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.141 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1anmr98 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:54 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.264 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe1anmr98" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.309 226437 DEBUG nova.storage.rbd_utils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] rbd image 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.312 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.329 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091054.2725675, 2314cf64-76a5-4383-8f2e-58228261f71b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.330 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Started (Lifecycle Event)#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.361 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.365 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091054.27269, 2314cf64-76a5-4383-8f2e-58228261f71b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.365 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:10:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:10:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.399 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.401 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.417 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.418 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_pg3kwj0/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.289 237689 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.292 237689 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.294 237689 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.294 237689 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237689#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.421 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[e6abe1b3-6425-40f4-9cd0-3153fabe1009]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.431 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.456 226437 DEBUG oslo_concurrency.processutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config 0c72e43b-d26a-47b8-ab7d-739190e552a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.457 226437 INFO nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deleting local config drive /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5/disk.config because it was imported into RBD.#033[00m
Jan 22 09:10:54 np0005592159 NetworkManager[49000]: <info>  [1769091054.5097] manager: (tap1bf106b6-de): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Jan 22 09:10:54 np0005592159 systemd-udevd[237605]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:10:54 np0005592159 kernel: tap1bf106b6-de: entered promiscuous mode
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.514 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:54 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:54Z|00031|binding|INFO|Claiming lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 for this chassis.
Jan 22 09:10:54 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:54Z|00032|binding|INFO|1bf106b6-ded0-49a9-a53d-2c3faebdf840: Claiming fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.520 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:54 np0005592159 NetworkManager[49000]: <info>  [1769091054.5294] device (tap1bf106b6-de): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:10:54 np0005592159 NetworkManager[49000]: <info>  [1769091054.5298] device (tap1bf106b6-de): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.529 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], port_security=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.29/26 fdfe:381f:8400::7d/64', 'neutron:device_id': '0c72e43b-d26a-47b8-ab7d-739190e552a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=1bf106b6-ded0-49a9-a53d-2c3faebdf840) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:10:54 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:54Z|00033|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 ovn-installed in OVS
Jan 22 09:10:54 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:54Z|00034|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 up in Southbound
Jan 22 09:10:54 np0005592159 nova_compute[226433]: 2026-01-22 14:10:54.538 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:54 np0005592159 systemd-machined[194970]: New machine qemu-2-instance-00000005.
Jan 22 09:10:54 np0005592159 systemd[1]: Started Virtual Machine qemu-2-instance-00000005.
Jan 22 09:10:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:54.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:54.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:10:54 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:54 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:54.983 237689 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.054 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091055.0545008, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.055 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Started (Lifecycle Event)#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.095 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.098 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091055.0545843, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.098 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.151 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.154 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.201 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.218 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.253 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updated VIF entry in instance network info cache for port 1bf106b6-ded0-49a9-a53d-2c3faebdf840. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.254 226437 DEBUG nova.network.neutron [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:10:55 np0005592159 nova_compute[226433]: 2026-01-22 14:10:55.289 226437 DEBUG oslo_concurrency.lockutils [req-f391ab1b-ce37-4d21-8528-acfb71bd2a08 req-b9bac49f-14c1-4cd5-9990-2c67d6a8cbdc 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-0c72e43b-d26a-47b8-ab7d-739190e552a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
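The instance_info_cache payload logged above is plain JSON-style data, so the fields that matter for debugging (fixed IPs, MAC, MTU, tap device) can be pulled out with ordinary dict handling. An illustrative parse over an abbreviated copy of the entry above; nothing here is a Nova API:

    import json

    # network_info as logged above, trimmed to the fields used below.
    network_info = json.loads("""[{"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840",
      "address": "fa:16:3e:91:f4:90",
      "network": {"label": "auto_allocated_network",
                  "subnets": [{"cidr": "10.1.0.0/26",
                               "ips": [{"address": "10.1.0.29", "version": 4}]},
                              {"cidr": "fdfe:381f:8400::/64",
                               "ips": [{"address": "fdfe:381f:8400::7d", "version": 6}]}],
                  "meta": {"mtu": 1442, "tunneled": true}},
      "type": "ovs", "devname": "tap1bf106b6-de",
      "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840"}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        # -> tap1bf106b6-de fa:16:3e:91:f4:90 1442 ['10.1.0.29', 'fdfe:381f:8400::7d']
        print(vif["devname"], vif["address"], vif["network"]["meta"]["mtu"], ips)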
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.643 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[4a7504ef-5ea4-4763-9186-1550852eb8cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.644 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18c81f01-31 in ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.646 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18c81f01-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.646 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[27b237f2-aeb9-4d1b-a6e0-9a33e2cbc757]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.649 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[24a4ce15-afd3-49dc-acb6-7c72f965d268]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:55.660+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.670 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[b8030c21-55bc-4838-9e28-f185a3e3601f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.694 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[51a2ef51-e907-4b29-a57b-3332a7821ff7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:55.696 143497 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp3y50ov6x/privsep.sock']#033[00m
Jan 22 09:10:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:10:56 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:10:56 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2043 sec, osd.2 has slow ops (SLOW_OPS)
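The recurring slow-request warnings (osd.2 reporting 12 slow ops, the oldest blocked for over 2000 seconds, mostly against the vms pool) surface cluster-wide as the SLOW_OPS health check above. A hedged sketch of reading that health check programmatically through the librados Python binding; the conffile path is an assumption, and the JSON layout below follows recent Ceph releases and may differ on other versions:

    import json
    import rados

    # Assumes the default cluster config and an admin keyring are readable.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Rough equivalent of "ceph health --format json"; returns (ret, outbuf, outs).
        ret, outbuf, outs = cluster.mon_command(
            json.dumps({"prefix": "health", "format": "json"}), b"")
        if ret == 0:
            health = json.loads(outbuf)
            for name, check in health.get("checks", {}).items():
                # e.g. SLOW_OPS: "12 slow ops, oldest one blocked for 2043 sec, ..."
                print(name, check["summary"]["message"])
    finally:
        cluster.shutdown()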
Jan 22 09:10:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:56.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.370 226437 DEBUG nova.compute.manager [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG oslo_concurrency.lockutils [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.371 226437 DEBUG nova.compute.manager [req-d8966921-3e95-4ac8-9be1-9f9bf4b29565 req-b69dbb69-d0c6-47f2-a9b0-2b1c494776ef 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Processing event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.372 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.376 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.376 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091056.3766308, 2314cf64-76a5-4383-8f2e-58228261f71b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.377 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.382 226437 INFO nova.virt.libvirt.driver [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance spawned successfully.#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.382 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.398 143497 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.399 143497 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3y50ov6x/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.254 237788 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.258 237788 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.260 237788 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.260 237788 INFO oslo.privsep.daemon [-] privsep daemon running as pid 237788#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.401 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[d4af23fb-8ec3-4848-9de5-8532433215f2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
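The sequence above is the usual oslo.privsep startup: the agent launches a privsep-helper through sudo and neutron-rootwrap, and the daemon reports back that it runs as uid/gid 0/0 holding only CAP_NET_ADMIN and CAP_SYS_ADMIN. A minimal sketch of how such a privileged context is declared in application code; the context name, config section, and function below are examples only, not neutron's actual definitions:

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # Privileged context: functions decorated with @link_cmd_ctx.entrypoint run
    # inside the separate privsep daemon with only the listed capabilities.
    link_cmd_ctx = priv_context.PrivContext(
        __name__,
        cfg_section="privsep_link",              # illustrative section name
        pypath=__name__ + ".link_cmd_ctx",
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )

    @link_cmd_ctx.entrypoint
    def set_link_up(ifname):
        # Body executes in the privileged daemon; placeholder only.
        print("would bring up", ifname)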
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.476 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.486 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.493 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.494 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.495 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.496 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.497 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.500 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.565 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.618 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 44.79 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.619 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:56.676+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:56.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
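The radosgw "beast" lines are plain access-log records: client IP, user, timestamp, request line, HTTP status, byte count, and latency. For quick triage they can be split with an ordinary regex; the pattern below targets only the exact layout shown in this log and is not a general beast-log parser:

    import re

    line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:14:10:56.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')

    pattern = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    m = pattern.search(line)
    if m:
        # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
        print(m.group("ip"), m.group("request"), m.group("status"), m.group("latency"))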
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.711 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 46.59 seconds to build instance.#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.750 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 46.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
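The "Acquiring lock" / "Lock ... acquired" / "Lock ... released" lines throughout come from oslo.concurrency's lockutils; here the per-instance lock around _locked_do_build_and_run_instance was held for the full 46.756 s build. The basic usage pattern, sketched with placeholder names rather than Nova's code:

    from oslo_concurrency import lockutils

    INSTANCE_UUID = "2314cf64-76a5-4383-8f2e-58228261f71b"

    @lockutils.synchronized(INSTANCE_UUID)
    def build_instance():
        # Runs with the named lock held; concurrent calls for the same
        # instance serialize, as in the 46.756 s hold logged above.
        pass

    build_instance()

    # The same lock used directly as a context manager:
    with lockutils.lock(INSTANCE_UUID):
        pass  # critical section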
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:56.916 237788 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.975 226437 DEBUG nova.compute.manager [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG oslo_concurrency.lockutils [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.976 226437 DEBUG nova.compute.manager [req-e9aec100-fb76-4e10-a1d0-517f542817f7 req-47bd8bb4-5803-484c-9d9b-6a499eddc437 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Processing event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.977 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.980 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091056.980116, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.980 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.982 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.984 226437 INFO nova.virt.libvirt.driver [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance spawned successfully.#033[00m
Jan 22 09:10:56 np0005592159 nova_compute[226433]: 2026-01-22 14:10:56.985 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.025 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.029 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.066 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.067 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.067 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.068 226437 DEBUG nova.virt.libvirt.driver [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.072 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.212 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 46.12 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.212 226437 DEBUG nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:10:57 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.310 226437 INFO nova.compute.manager [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 47.28 seconds to build instance.#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.388 226437 DEBUG oslo_concurrency.lockutils [None req-2078d650-7484-48af-b56d-96b35d950cec fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 47.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.526 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[e163408c-4062-45a8-a111-26c3f8c4f82b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.544 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fa320ec6-6547-494e-b615-80e18c454830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 NetworkManager[49000]: <info>  [1769091057.5508] manager: (tap18c81f01-30): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Jan 22 09:10:57 np0005592159 systemd-udevd[237801]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.573 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[35ecab8b-8761-4b9c-ba58-b6ddfc1e8e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.576 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[da3ec314-8faa-424a-a895-343eb0cd5c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 NetworkManager[49000]: <info>  [1769091057.5989] device (tap18c81f01-30): carrier: link connected
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.602 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[f52c52cb-d3b2-47a4-aad2-b6f975519ee3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.617 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec81daa-d6b4-46ff-9d59-2ee90e9ac2dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237819, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.636 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0230b09b-487d-40b8-bf41-ac8ae3813b03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe66:9efc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490830, 'tstamp': 490830}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237820, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.650 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b72157e6-9adb-43ca-9f8f-46ce76fb167d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 237821, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.672 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[708272f9-9b10-4571-bfba-fbdf6c504bc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:57.676+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.719 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[88963e28-03e2-4534-bd72-48eee44ad4c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.721 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.721 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.722 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.724 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:57 np0005592159 NetworkManager[49000]: <info>  [1769091057.7246] manager: (tap18c81f01-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 22 09:10:57 np0005592159 kernel: tap18c81f01-30: entered promiscuous mode
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.726 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.728 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
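The DelPortCommand, AddPortCommand, and DbSetCommand transactions above are ovsdbapp operations: the metadata tap is moved onto br-int and its Interface record is given the external_ids:iface-id that ovn-controller then binds to (or, as in the "Releasing lport" line below, releases). A rough equivalent using ovsdbapp directly; the database socket path and timeout are assumptions and error handling is omitted:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_DB = "unix:/run/openvswitch/db.sock"   # assumed local ovsdb-server socket

    idl = connection.OvsdbIdl.from_server(OVS_DB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    port = "tap18c81f01-30"
    with api.transaction(check_error=True) as txn:
        # Same three steps as the logged transaction: remove from br-ex if present,
        # add to br-int, and set the OVN interface id on the Interface record.
        txn.add(api.del_port(port, bridge="br-ex", if_exists=True))
        txn.add(api.add_port("br-int", port, may_exist=True))
        txn.add(api.db_set("Interface", port,
                           ("external_ids",
                            {"iface-id": "27625ef7-8ad4-4498-ac70-a911e819f701"})))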
Jan 22 09:10:57 np0005592159 ovn_controller[133156]: 2026-01-22T14:10:57Z|00035|binding|INFO|Releasing lport 27625ef7-8ad4-4498-ac70-a911e819f701 from this chassis (sb_readonly=0)
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.729 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:57 np0005592159 nova_compute[226433]: 2026-01-22 14:10:57.745 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.747 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.748 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b64d1c25-9783-430c-b249-b51875b8d757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.751 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: global
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    log         /dev/log local0 debug
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    log-tag     haproxy-metadata-proxy-18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    user        root
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    group       root
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    maxconn     1024
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    pidfile     /var/lib/neutron/external/pids/18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    daemon
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: defaults
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    log global
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    mode http
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    option httplog
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    option dontlognull
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    option http-server-close
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    option forwardfor
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    retries                 3
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    timeout http-request    30s
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    timeout connect         30s
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    timeout client          32s
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    timeout server          32s
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    timeout http-keep-alive 30s
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: listen listener
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    bind 169.254.169.254:80
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]:    http-request add-header X-OVN-Network-ID 18c81f01-33be-49a1-a179-aecc87794f99
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:10:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:57.753 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'env', 'PROCESS_TAG=haproxy-18c81f01-33be-49a1-a179-aecc87794f99', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18c81f01-33be-49a1-a179-aecc87794f99.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
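Before spawning haproxy the agent checks for an existing pidfile; the earlier "Unable to access ...pid.haproxy" debug line simply means no metadata proxy was running yet for this network. Checking for a live proxy by hand follows the same idea; a small illustrative sketch with the pidfile path taken from the rendered config above:

    import os

    pidfile = ("/var/lib/neutron/external/pids/"
               "18c81f01-33be-49a1-a179-aecc87794f99.pid.haproxy")

    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except FileNotFoundError:
        print("no metadata proxy running for this network yet")
    else:
        try:
            os.kill(pid, 0)          # signal 0: existence check only
            print(f"haproxy metadata proxy alive, pid {pid}")
        except ProcessLookupError:
            print(f"stale pidfile, pid {pid} is gone")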
Jan 22 09:10:58 np0005592159 podman[237904]: 2026-01-22 14:10:58.163775819 +0000 UTC m=+0.059168616 container create 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:10:58 np0005592159 systemd[1]: Started libpod-conmon-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope.
Jan 22 09:10:58 np0005592159 podman[237904]: 2026-01-22 14:10:58.135583193 +0000 UTC m=+0.030976030 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:10:58 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:10:58 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39f1a7dbb9fbf437360a4b9755ab2b91a6644c636b82f1e3d91c08d6fa81b3c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:10:58 np0005592159 podman[237904]: 2026-01-22 14:10:58.268931652 +0000 UTC m=+0.164324479 container init 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:10:58 np0005592159 podman[237904]: 2026-01-22 14:10:58.294474107 +0000 UTC m=+0.189866904 container start 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 09:10:58 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : New worker (237926) forked
Jan 22 09:10:58 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : Loading success.
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.355 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.358 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99#033[00m
Jan 22 09:10:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:10:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:10:58.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.373 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fea5eb6f-82f8-4fab-804e-eebbf828cb85]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.403 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[306f60a5-e170-4ed9-a7cb-21befa6f9c48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.408 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[5ee151a3-79f4-4d5f-ade5-5d53261571eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.416 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.438 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7f4941cc-64f2-43a5-a288-f8723255a487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.454 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[825b1fcf-11fd-4ccc-b3f9-891f485b5dc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237940, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.470 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[1b592c4c-d3ea-48a4-a9e3-ffcd322cbe5f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490839, 'tstamp': 490839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237941, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.2'], ['IFA_LOCAL', '10.1.0.2'], ['IFA_BROADCAST', '10.1.0.63'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490842, 'tstamp': 490842}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237941, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.471 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.473 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.476 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:10:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:10:58.477 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.643 226437 DEBUG nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG oslo_concurrency.lockutils [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.644 226437 DEBUG nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:10:58 np0005592159 nova_compute[226433]: 2026-01-22 14:10:58.645 226437 WARNING nova.compute.manager [req-2164f4b4-9bb8-4171-87ea-26360727a84b req-38b0efde-c37e-4dd7-b161-b78e81a6793a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received unexpected event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with vm_state active and task_state None.#033[00m
Jan 22 09:10:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:10:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:10:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:10:58.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:10:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:58.720+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:59 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.528 226437 DEBUG nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.529 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.529 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 DEBUG oslo_concurrency.lockutils [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 DEBUG nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:10:59 np0005592159 nova_compute[226433]: 2026-01-22 14:10:59.530 226437 WARNING nova.compute.manager [req-30637151-d29e-47d7-a2b3-2fc4cbd87260 req-4797e10f-2b64-4c7a-9097-98b1a05cb2cf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received unexpected event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with vm_state active and task_state None.#033[00m
Jan 22 09:10:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:10:59.687+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:10:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592159 nova_compute[226433]: 2026-01-22 14:11:00.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:00.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:00 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:00 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 2048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:00.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:00.711+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:01 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:01.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:02.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:02 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:02.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:02.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.758 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.759 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.761 226437 INFO nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Terminating instance#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.762 226437 DEBUG nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:11:02 np0005592159 kernel: tap1bf106b6-de (unregistering): left promiscuous mode
Jan 22 09:11:02 np0005592159 NetworkManager[49000]: <info>  [1769091062.8221] device (tap1bf106b6-de): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.836 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:02Z|00036|binding|INFO|Releasing lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 from this chassis (sb_readonly=0)
Jan 22 09:11:02 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:02Z|00037|binding|INFO|Setting lport 1bf106b6-ded0-49a9-a53d-2c3faebdf840 down in Southbound
Jan 22 09:11:02 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:02Z|00038|binding|INFO|Removing iface tap1bf106b6-de ovn-installed in OVS
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.840 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.847 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], port_security=['fa:16:3e:91:f4:90 10.1.0.29 fdfe:381f:8400::7d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.29/26 fdfe:381f:8400::7d/64', 'neutron:device_id': '0c72e43b-d26a-47b8-ab7d-739190e552a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=1bf106b6-ded0-49a9-a53d-2c3faebdf840) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.848 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 1bf106b6-ded0-49a9-a53d-2c3faebdf840 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.851 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18c81f01-33be-49a1-a179-aecc87794f99#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.859 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.867 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7e29ecc5-bbc7-4d9b-9494-3e63f95df026]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.898 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[2b605f6f-a761-47dd-a16d-0354621139c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.901 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7fef56-c21a-4e32-8aa0-fc218f343806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 22 09:11:02 np0005592159 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000005.scope: Consumed 6.431s CPU time.
Jan 22 09:11:02 np0005592159 systemd-machined[194970]: Machine qemu-2-instance-00000005 terminated.
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.936 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1e5507-300d-466b-b618-8317c46c093f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.952 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[d77f5c8b-fdda-4516-b6c7-9527ee4a7044]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18c81f01-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:66:9e:fc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 14], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490830, 'reachable_time': 33686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 237956, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.966 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[c88de1e1-a447-478a-84b6-2a4e56b165ea]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490839, 'tstamp': 490839}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237957, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 26, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.1.0.2'], ['IFA_LOCAL', '10.1.0.2'], ['IFA_BROADCAST', '10.1.0.63'], ['IFA_LABEL', 'tap18c81f01-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490842, 'tstamp': 490842}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 237957, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.968 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.969 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.976 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.976 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18c81f01-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.977 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.977 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18c81f01-30, col_values=(('external_ids', {'iface-id': '27625ef7-8ad4-4498-ac70-a911e819f701'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:02.978 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.996 226437 INFO nova.virt.libvirt.driver [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Instance destroyed successfully.#033[00m
Jan 22 09:11:02 np0005592159 nova_compute[226433]: 2026-01-22 14:11:02.996 226437 DEBUG nova.objects.instance [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'resources' on Instance uuid 0c72e43b-d26a-47b8-ab7d-739190e552a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.025 226437 DEBUG nova.virt.libvirt.vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-1',id=5,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:10:57Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:10:57Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=0c72e43b-d26a-47b8-ab7d-739190e552a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.025 226437 DEBUG nova.network.os_vif_util [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "address": "fa:16:3e:91:f4:90", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::7d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1bf106b6-de", "ovs_interfaceid": "1bf106b6-ded0-49a9-a53d-2c3faebdf840", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.026 226437 DEBUG nova.network.os_vif_util [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.026 226437 DEBUG os_vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.028 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.028 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1bf106b6-de, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.033 226437 INFO os_vif [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:f4:90,bridge_name='br-int',has_traffic_filtering=True,id=1bf106b6-ded0-49a9-a53d-2c3faebdf840,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1bf106b6-de')#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.278 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG oslo_concurrency.lockutils [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.279 226437 DEBUG nova.compute.manager [req-fa62732d-d937-4117-84a4-bd673e93277d req-f2a5846d-1982-4af2-8ee0-62cb18c3a65a 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-unplugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.419 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.521 226437 INFO nova.virt.libvirt.driver [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deleting instance files /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5_del#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.522 226437 INFO nova.virt.libvirt.driver [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deletion of /var/lib/nova/instances/0c72e43b-d26a-47b8-ab7d-739190e552a5_del complete#033[00m
Jan 22 09:11:03 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.648 226437 DEBUG nova.virt.libvirt.host [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.649 226437 INFO nova.virt.libvirt.host [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] UEFI support detected#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 INFO nova.compute.manager [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG oslo.service.loopingcall [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:11:03 np0005592159 nova_compute[226433]: 2026-01-22 14:11:03.651 226437 DEBUG nova.network.neutron [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:11:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:03.724+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:04.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:04 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:04.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:04.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.535 226437 DEBUG oslo_concurrency.lockutils [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.536 226437 DEBUG nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] No waiting events found dispatching network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.536 226437 WARNING nova.compute.manager [req-83f18f41-79e6-4575-8707-3dbd6c7a2f14 req-3d19e49c-ad19-4caf-bbf8-da970b07918e 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received unexpected event network-vif-plugged-1bf106b6-ded0-49a9-a53d-2c3faebdf840 for instance with vm_state active and task_state deleting.#033[00m
Jan 22 09:11:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:05.781+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.812 226437 DEBUG nova.network.neutron [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.842 226437 INFO nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Took 2.19 seconds to deallocate network for instance.#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.914 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.915 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:05 np0005592159 nova_compute[226433]: 2026-01-22 14:11:05.984 226437 DEBUG nova.compute.manager [req-f97b4fcb-32a0-45f9-b287-ec7fdcfb7696 req-a95b3825-3443-426c-a753-d5295e3e6198 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Received event network-vif-deleted-1bf106b6-ded0-49a9-a53d-2c3faebdf840 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:06.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:06 np0005592159 nova_compute[226433]: 2026-01-22 14:11:06.436 226437 DEBUG oslo_concurrency.processutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:06 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:06.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:06.772+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:06 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3749524116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:06 np0005592159 nova_compute[226433]: 2026-01-22 14:11:06.887 226437 DEBUG oslo_concurrency.processutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:06 np0005592159 nova_compute[226433]: 2026-01-22 14:11:06.892 226437 DEBUG nova.compute.provider_tree [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:06 np0005592159 nova_compute[226433]: 2026-01-22 14:11:06.922 226437 DEBUG nova.scheduler.client.report [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:06 np0005592159 nova_compute[226433]: 2026-01-22 14:11:06.959 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:07 np0005592159 nova_compute[226433]: 2026-01-22 14:11:07.018 226437 INFO nova.scheduler.client.report [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Deleted allocations for instance 0c72e43b-d26a-47b8-ab7d-739190e552a5#033[00m
Jan 22 09:11:07 np0005592159 podman[238015]: 2026-01-22 14:11:07.062441625 +0000 UTC m=+0.116237456 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:11:07 np0005592159 nova_compute[226433]: 2026-01-22 14:11:07.197 226437 DEBUG oslo_concurrency.lockutils [None req-529985c0-98b2-4e85-98f8-41f7b0db6b19 fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "0c72e43b-d26a-47b8-ab7d-739190e552a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:07 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:07.782+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.030 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:08.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.421 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:08.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:08.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.899 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.900 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.902 226437 INFO nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Terminating instance#033[00m
Jan 22 09:11:08 np0005592159 nova_compute[226433]: 2026-01-22 14:11:08.904 226437 DEBUG nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:11:09 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:09 np0005592159 kernel: tap3fe867d7-5e (unregistering): left promiscuous mode
Jan 22 09:11:09 np0005592159 NetworkManager[49000]: <info>  [1769091069.6921] device (tap3fe867d7-5e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:11:09 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:09Z|00039|binding|INFO|Releasing lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 from this chassis (sb_readonly=0)
Jan 22 09:11:09 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:09Z|00040|binding|INFO|Setting lport 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 down in Southbound
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.707 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:09Z|00041|binding|INFO|Removing iface tap3fe867d7-5e ovn-installed in OVS
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.710 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.726 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], port_security=['fa:16:3e:c1:38:78 10.1.0.8 fdfe:381f:8400::3c7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.1.0.8/26 fdfe:381f:8400::3c7/64', 'neutron:device_id': '2314cf64-76a5-4383-8f2e-58228261f71b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18c81f01-33be-49a1-a179-aecc87794f99', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e6c399bf43074b81b45ca1d976cb2b18', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf8ad411-4de1-44ac-9786-b28073f7eae5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=31fa5db4-01e0-4829-871e-73a496aafe58, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=3fe867d7-5ecf-4683-85f1-5f2bdce33a78) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.727 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 3fe867d7-5ecf-4683-85f1-5f2bdce33a78 in datapath 18c81f01-33be-49a1-a179-aecc87794f99 unbound from our chassis#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.728 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.729 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18c81f01-33be-49a1-a179-aecc87794f99, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:11:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.730 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[fbcfcc19-6825-4935-9467-7e7bb3ad4925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:09.732 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 namespace which is not needed anymore#033[00m
Jan 22 09:11:09 np0005592159 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 22 09:11:09 np0005592159 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 12.898s CPU time.
Jan 22 09:11:09 np0005592159 systemd-machined[194970]: Machine qemu-1-instance-00000006 terminated.
Jan 22 09:11:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:09.823+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:09 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : haproxy version is 2.8.14-c23fe91
Jan 22 09:11:09 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [NOTICE]   (237924) : path to executable is /usr/sbin/haproxy
Jan 22 09:11:09 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [WARNING]  (237924) : Exiting Master process...
Jan 22 09:11:09 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [ALERT]    (237924) : Current worker (237926) exited with code 143 (Terminated)
Jan 22 09:11:09 np0005592159 neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99[237920]: [WARNING]  (237924) : All workers exited. Exiting... (0)
Jan 22 09:11:09 np0005592159 systemd[1]: libpod-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope: Deactivated successfully.
Jan 22 09:11:09 np0005592159 podman[238116]: 2026-01-22 14:11:09.890640545 +0000 UTC m=+0.057498772 container died 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:11:09 np0005592159 systemd[1]: var-lib-containers-storage-overlay-39f1a7dbb9fbf437360a4b9755ab2b91a6644c636b82f1e3d91c08d6fa81b3c7-merged.mount: Deactivated successfully.
Jan 22 09:11:09 np0005592159 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7-userdata-shm.mount: Deactivated successfully.
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.925 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 podman[238116]: 2026-01-22 14:11:09.926671778 +0000 UTC m=+0.093529975 container cleanup 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.930 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 systemd[1]: libpod-conmon-7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7.scope: Deactivated successfully.
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.938 226437 INFO nova.virt.libvirt.driver [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Instance destroyed successfully.#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.939 226437 DEBUG nova.objects.instance [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lazy-loading 'resources' on Instance uuid 2314cf64-76a5-4383-8f2e-58228261f71b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.963 226437 DEBUG nova.virt.libvirt.vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:10:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-tempest.common.compute-instance-811251323-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-811251323-2',id=6,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-22T14:10:56Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e6c399bf43074b81b45ca1d976cb2b18',ramdisk_id='',reservation_id='r-qn3kupwc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AutoAllocateNetworkTest-687426125',owner_user_name='tempest-AutoAllocateNetworkTest-687426125-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:10:56Z,user_data=None,user_id='fd58a5335a8745f1b3ce1bd9a0439003',uuid=2314cf64-76a5-4383-8f2e-58228261f71b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.963 226437 DEBUG nova.network.os_vif_util [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converting VIF {"id": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "address": "fa:16:3e:c1:38:78", "network": {"id": "18c81f01-33be-49a1-a179-aecc87794f99", "bridge": "br-int", "label": "auto_allocated_network", "subnets": [{"cidr": "10.1.0.0/26", "dns": [], "gateway": {"address": "10.1.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.1.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "fdfe:381f:8400::/64", "dns": [], "gateway": {"address": "fdfe:381f:8400::1", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "fdfe:381f:8400::3c7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e6c399bf43074b81b45ca1d976cb2b18", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3fe867d7-5e", "ovs_interfaceid": "3fe867d7-5ecf-4683-85f1-5f2bdce33a78", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.964 226437 DEBUG nova.network.os_vif_util [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.964 226437 DEBUG os_vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.966 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.966 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3fe867d7-5e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.969 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:09 np0005592159 nova_compute[226433]: 2026-01-22 14:11:09.971 226437 INFO os_vif [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:38:78,bridge_name='br-int',has_traffic_filtering=True,id=3fe867d7-5ecf-4683-85f1-5f2bdce33a78,network=Network(18c81f01-33be-49a1-a179-aecc87794f99),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3fe867d7-5e')#033[00m
Jan 22 09:11:09 np0005592159 podman[238152]: 2026-01-22 14:11:09.996945278 +0000 UTC m=+0.049715947 container remove 7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.002 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0c5d2f-5924-491f-88a8-765414ae7b65]: (4, ('Thu Jan 22 02:11:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7)\n7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7\nThu Jan 22 02:11:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 (7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7)\n7c37356e3e62eb020976fe6f4640ed9266bc6872582dd6d8be1548ecce37b1d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.003 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[45535e34-96a6-4a20-b412-71b0f7f8a792]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.004 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18c81f01-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.006 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.018 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:10 np0005592159 kernel: tap18c81f01-30: left promiscuous mode
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.022 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[d543cbe5-fe44-4cdd-9a11-295f3fa4a7a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.044 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[20fe42ec-5536-4de1-9926-ba462bea7edf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.046 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b62390e2-5213-47c2-bc10-a8d39fb1c8b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.058 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[39ec3f00-fdb9-4f80-bfd6-090e1dbfb7ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490822, 'reachable_time': 37114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 238190, 'error': None, 'target': 'ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 systemd[1]: run-netns-ovnmeta\x2d18c81f01\x2d33be\x2d49a1\x2da179\x2daecc87794f99.mount: Deactivated successfully.
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.068 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18c81f01-33be-49a1-a179-aecc87794f99 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:11:10 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:10.069 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[b7259e92-773a-499b-b50e-ed9694a97746]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:11:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:10.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.451 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG oslo_concurrency.lockutils [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.452 226437 DEBUG nova.compute.manager [req-7195b897-e383-4f61-8192-571bd029b25f req-11d48fb6-9ffd-40a9-a522-e4e75eaa9189 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-unplugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.493 226437 INFO nova.virt.libvirt.driver [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deleting instance files /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b_del#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.493 226437 INFO nova.virt.libvirt.driver [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deletion of /var/lib/nova/instances/2314cf64-76a5-4383-8f2e-58228261f71b_del complete#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.573 226437 INFO nova.compute.manager [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 1.67 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.574 226437 DEBUG oslo.service.loopingcall [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.574 226437 DEBUG nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:11:10 np0005592159 nova_compute[226433]: 2026-01-22 14:11:10.575 226437 DEBUG nova.network.neutron [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:11:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:10 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:10 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:10.796+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:11 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:11.760+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:11 np0005592159 nova_compute[226433]: 2026-01-22 14:11:11.916 226437 DEBUG nova.network.neutron [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:11:11 np0005592159 nova_compute[226433]: 2026-01-22 14:11:11.947 226437 INFO nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Took 1.37 seconds to deallocate network for instance.#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.042 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.042 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.178 226437 DEBUG oslo_concurrency.processutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.318 226437 DEBUG nova.compute.manager [req-51684e27-dc3b-4f8b-8975-aa9bdea9550b req-7ed5cb0e-153a-4b88-a447-1b37cd3d1cc7 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-deleted-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:12.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.585 226437 DEBUG nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.586 226437 DEBUG oslo_concurrency.lockutils [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.587 226437 DEBUG nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] No waiting events found dispatching network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.587 226437 WARNING nova.compute.manager [req-b7b8eb34-a1b2-4516-9668-1844a98b0fe2 req-958b19bc-6443-4357-a6a6-a6c21cb4bd6b 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Received unexpected event network-vif-plugged-3fe867d7-5ecf-4683-85f1-5f2bdce33a78 for instance with vm_state deleted and task_state None.#033[00m
Jan 22 09:11:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:12 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4163491815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.605 226437 DEBUG oslo_concurrency.processutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.609 226437 DEBUG nova.compute.provider_tree [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.631 226437 DEBUG nova.scheduler.client.report [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.669 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.707 226437 INFO nova.scheduler.client.report [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Deleted allocations for instance 2314cf64-76a5-4383-8f2e-58228261f71b#033[00m
Jan 22 09:11:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:12.762+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:12 np0005592159 nova_compute[226433]: 2026-01-22 14:11:12.826 226437 DEBUG oslo_concurrency.lockutils [None req-1ced3354-feaf-42e4-8abb-463a176f974a fd58a5335a8745f1b3ce1bd9a0439003 e6c399bf43074b81b45ca1d976cb2b18 - - default default] Lock "2314cf64-76a5-4383-8f2e-58228261f71b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:13 np0005592159 nova_compute[226433]: 2026-01-22 14:11:13.424 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:13.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:13 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:14.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:14.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:14.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:14 np0005592159 nova_compute[226433]: 2026-01-22 14:11:14.970 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:15.587 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:11:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:15.588 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:11:15 np0005592159 nova_compute[226433]: 2026-01-22 14:11:15.588 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:15.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:15 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:16.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:16.715+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:17.737+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:17 np0005592159 nova_compute[226433]: 2026-01-22 14:11:17.994 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091062.9939866, 0c72e43b-d26a-47b8-ab7d-739190e552a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:11:17 np0005592159 nova_compute[226433]: 2026-01-22 14:11:17.995 226437 INFO nova.compute.manager [-] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:11:18 np0005592159 nova_compute[226433]: 2026-01-22 14:11:18.083 226437 DEBUG nova.compute.manager [None req-f6315c38-80c4-4dec-86b4-db8b117b2dcd - - - - - -] [instance: 0c72e43b-d26a-47b8-ab7d-739190e552a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:18.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:18 np0005592159 nova_compute[226433]: 2026-01-22 14:11:18.427 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:18.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:18.708+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:19 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:19.590 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:11:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:19.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:19 np0005592159 nova_compute[226433]: 2026-01-22 14:11:19.973 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:20 np0005592159 podman[238221]: 2026-01-22 14:11:20.017230778 +0000 UTC m=+0.065390251 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 09:11:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:20.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:20.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:20.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:21 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:21.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:22 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:22.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:22.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:22.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:23 np0005592159 nova_compute[226433]: 2026-01-22 14:11:23.429 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:23.704+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:24.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:24.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:24.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:24 np0005592159 nova_compute[226433]: 2026-01-22 14:11:24.937 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769091069.9359367, 2314cf64-76a5-4383-8f2e-58228261f71b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:11:24 np0005592159 nova_compute[226433]: 2026-01-22 14:11:24.937 226437 INFO nova.compute.manager [-] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:11:24 np0005592159 nova_compute[226433]: 2026-01-22 14:11:24.959 226437 DEBUG nova.compute.manager [None req-80fa0f9f-2d47-4e76-8496-6222328ab9a1 - - - - - -] [instance: 2314cf64-76a5-4383-8f2e-58228261f71b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:11:24 np0005592159 nova_compute[226433]: 2026-01-22 14:11:24.975 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:25 np0005592159 nova_compute[226433]: 2026-01-22 14:11:25.591 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:25.638+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:25 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:26.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:26.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:26.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:26 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:27.594+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:28.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:28 np0005592159 nova_compute[226433]: 2026-01-22 14:11:28.430 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:28.594+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:28.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:29.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:29 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:29 np0005592159 nova_compute[226433]: 2026-01-22 14:11:29.977 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:30.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:30.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:30.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:30 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:31.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:32.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:32.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:32.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:33 np0005592159 nova_compute[226433]: 2026-01-22 14:11:33.432 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:33 np0005592159 nova_compute[226433]: 2026-01-22 14:11:33.546 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:33.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:34.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:34.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:34 np0005592159 nova_compute[226433]: 2026-01-22 14:11:34.980 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:35.601+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.039 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.039 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "f591d61b-712e-49aa-85bd-8d222b607eb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.066 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:11:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.176 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.177 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.186 226437 DEBUG nova.virt.hardware [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.186 226437 INFO nova.compute.claims [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.390 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.415 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.415 226437 DEBUG nova.compute.provider_tree [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:11:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:36.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.460 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.507 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:36.554+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:36 np0005592159 nova_compute[226433]: 2026-01-22 14:11:36.575 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:36.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/744848052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.005 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.012 226437 DEBUG nova.compute.provider_tree [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.040 226437 DEBUG nova.scheduler.client.report [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.074 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.075 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.155 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Jan 22 09:11:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.179 226437 INFO nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.219 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.375 226437 DEBUG nova.compute.manager [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.377 226437 DEBUG nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.377 226437 INFO nova.virt.libvirt.driver [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Creating image(s)#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.416 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.453 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.491 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.495 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.581 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
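The two lines above run `qemu-img info` under `oslo_concurrency.prlimit` with `--as=1073741824 --cpu=30`, i.e. the image probe is capped at 1 GiB of address space and 30 s of CPU. A sketch of the same idea using only the stdlib `resource` module in a subprocess pre-exec hook; this illustrates the technique, it is not oslo's implementation, and the base-image path is simply copied from the log:

```python
import json
import os
import resource
import subprocess

BASE = "/var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0"

def _limit_child():
    # Same limits the prlimit wrapper applies above: 1 GiB address space, 30 s CPU.
    resource.setrlimit(resource.RLIMIT_AS, (1 << 30, 1 << 30))
    resource.setrlimit(resource.RLIMIT_CPU, (30, 30))

def qemu_img_info(path):
    """Run `qemu-img info --force-share --output=json` with child rlimits (POSIX only)."""
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        env=dict(os.environ, LC_ALL="C", LANG="C"),
        preexec_fn=_limit_child,
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    info = qemu_img_info(BASE)
    print(info.get("format"), info.get("virtual-size"))
```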
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.582 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.583 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.583 226437 DEBUG oslo_concurrency.lockutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
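The lockutils lines above follow a fixed pattern: acquiring, acquired with the time waited, released with the time held. A minimal sketch of the same instrumentation with the stdlib, purely as an illustration of that pattern rather than oslo's code:

```python
import threading
import time
from contextlib import contextmanager

_locks = {}

@contextmanager
def timed_lock(name):
    """Log acquire/wait/hold times for a named lock, in the spirit of the
    oslo_concurrency.lockutils lines above (illustration only)."""
    lock = _locks.setdefault(name, threading.Lock())
    t0 = time.monotonic()
    lock.acquire()
    waited = time.monotonic() - t0
    print(f'Lock "{name}" acquired :: waited {waited:.3f}s')
    try:
        yield
    finally:
        held = time.monotonic() - t0 - waited
        lock.release()
        print(f'Lock "{name}" released :: held {held:.3f}s')

with timed_lock("compute_resources"):
    time.sleep(0.1)  # stand-in for the critical section
```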
Jan 22 09:11:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:37.589+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.617 226437 DEBUG nova.storage.rbd_utils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] rbd image f591d61b-712e-49aa-85bd-8d222b607eb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:11:37 np0005592159 nova_compute[226433]: 2026-01-22 14:11:37.621 226437 DEBUG oslo_concurrency.processutils [None req-33fcf0db-d56b-4bd3-bcb1-267a2a73996a 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 f591d61b-712e-49aa-85bd-8d222b607eb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
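Because the disk does not yet exist in the vms pool, nova imports the cached base file with the exact `rbd import` command shown above. A small wrapper around that invocation, with the flags copied from the log line; a sketch only, assuming the rbd CLI and credentials are in place:

```python
import subprocess

def rbd_import(base_path, image_name,
               pool="vms",
               client_id="openstack",
               conf="/etc/ceph/ceph.conf"):
    """Import a flat base file into RBD, mirroring the command in the log above."""
    cmd = [
        "rbd", "import",
        "--pool", pool,
        base_path, image_name,
        "--image-format=2",
        "--id", client_id,
        "--conf", conf,
    ]
    subprocess.run(cmd, check=True)

# Example (paths/names taken from the request above):
# rbd_import("/var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0",
#            "f591d61b-712e-49aa-85bd-8d222b607eb3_disk")
```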
Jan 22 09:11:38 np0005592159 podman[238415]: 2026-01-22 14:11:38.066152565 +0000 UTC m=+0.118771223 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:11:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:38.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.435 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:11:38 np0005592159 nova_compute[226433]: 2026-01-22 14:11:38.547 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:38.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
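The radosgw "beast:" access lines recurring in this section have a stable shape: client IP, auth user, bracketed timestamp, quoted request line, status, byte count and a latency field. A regex sketch for pulling those fields out of a journal extract; the pattern is my own and assumes only the fields visible in these lines:

```python
import re

# Matches the radosgw "beast:" access lines seen above.
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<when>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:14:11:38.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000027s')

m = BEAST_RE.search(line)
if m:
    print(m.group("client"), m.group("request"), m.group("status"),
          float(m.group("latency")))
```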
Jan 22 09:11:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.606 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.607 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.608 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:39.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:39 np0005592159 nova_compute[226433]: 2026-01-22 14:11:39.982 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2149985450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.053 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.257 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4808MB free_disk=20.951171875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.259 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:40 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.429 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.430 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.430 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.431 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=20GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
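The final resource view above can be reproduced from earlier lines in this section: the two building instances each hold 1 VCPU, 128 MB and 1 GB in placement, and the inventory reserves 512 MB of RAM. A short arithmetic check of the logged figures:

```python
# Worked check of the "Final resource view" line above.  The two instances being
# built each hold {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1} in placement
# (see the allocation lines earlier), and the inventory reserves 512 MB of RAM.
instances = 2
used_vcpus = instances * 1              # 2, as logged
used_ram_mb = 512 + instances * 128     # 768 MB, as logged
used_disk_gb = instances * 1            # 2 GB, as logged
print(used_vcpus, used_ram_mb, used_disk_gb)
```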
Jan 22 09:11:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:40.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.496 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:40.641+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:11:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/32100446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.967 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:11:40 np0005592159 nova_compute[226433]: 2026-01-22 14:11:40.975 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:11:41 np0005592159 nova_compute[226433]: 2026-01-22 14:11:41.014 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:11:41 np0005592159 nova_compute[226433]: 2026-01-22 14:11:41.071 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:11:41 np0005592159 nova_compute[226433]: 2026-01-22 14:11:41.072 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:41 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:41 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2088 sec, osd.2 has slow ops (SLOW_OPS)
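Throughout this window the OSD and mon repeat the same cluster-log summary ("N slow requests (by type [ 'delayed' : N ] most affected pool [ 'vms' : M ])") alongside the SLOW_OPS health updates. A small parser for pulling the latest counters out of a journal extract; the pattern is derived only from the summaries shown here:

```python
import re

# Pattern derived only from the cluster-log summaries repeated in this section.
SLOW_RE = re.compile(
    r"(?P<count>\d+) slow requests \(by type \[ '(?P<type>\w+)' : \d+ \] "
    r"most affected pool \[ '(?P<pool>\w+)' : (?P<pool_count>\d+) \]\)"
)

def latest_summary(lines):
    """Return the most recent (count, op_type, pool, pool_count) seen, or None."""
    last = None
    for line in lines:
        m = SLOW_RE.search(line)
        if m:
            last = (int(m.group("count")), m.group("type"),
                    m.group("pool"), int(m.group("pool_count")))
    return last

sample = [
    "12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 8 ])",
    "7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])",
]
print(latest_summary(sample))   # (7, 'delayed', 'vms', 5)
```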
Jan 22 09:11:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:41.620+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:42 np0005592159 nova_compute[226433]: 2026-01-22 14:11:42.067 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:42 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:42.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:42 np0005592159 nova_compute[226433]: 2026-01-22 14:11:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:42.670+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:43 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:43 np0005592159 nova_compute[226433]: 2026-01-22 14:11:43.437 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:43.625+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:44.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:44.603+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:11:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:11:44 np0005592159 nova_compute[226433]: 2026-01-22 14:11:44.985 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #67. Immutable memtables: 0.
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.476589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 67
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105476628, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 2475, "num_deletes": 251, "total_data_size": 4802638, "memory_usage": 4879512, "flush_reason": "Manual Compaction"}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #68: started
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105498290, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 68, "file_size": 3142489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34865, "largest_seqno": 37335, "table_properties": {"data_size": 3133257, "index_size": 5342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2821, "raw_key_size": 23921, "raw_average_key_size": 21, "raw_value_size": 3112819, "raw_average_value_size": 2796, "num_data_blocks": 230, "num_entries": 1113, "num_filter_entries": 1113, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769090934, "oldest_key_time": 1769090934, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 21873 microseconds, and 7570 cpu microseconds.
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498455) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #68: 3142489 bytes OK
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.498486) [db/memtable_list.cc:519] [default] Level-0 commit table #68 started
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500920) [db/memtable_list.cc:722] [default] Level-0 commit table #68: memtable #1 done
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500946) EVENT_LOG_v1 {"time_micros": 1769091105500938, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.500970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 4791386, prev total WAL file size 4791386, number of live WAL files 2.
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000064.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.503263) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [68(3068KB)], [66(7663KB)]
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105503296, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [68], "files_L6": [66], "score": -1, "input_data_size": 10989740, "oldest_snapshot_seqno": -1}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #69: 7643 keys, 9277398 bytes, temperature: kUnknown
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105569524, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 69, "file_size": 9277398, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9231769, "index_size": 25421, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 203073, "raw_average_key_size": 26, "raw_value_size": 9097649, "raw_average_value_size": 1190, "num_data_blocks": 983, "num_entries": 7643, "num_filter_entries": 7643, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 69, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.569822) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9277398 bytes
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.571657) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.6 rd, 139.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 7.5 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8158, records dropped: 515 output_compression: NoCompression
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.571677) EVENT_LOG_v1 {"time_micros": 1769091105571668, "job": 40, "event": "compaction_finished", "compaction_time_micros": 66348, "compaction_time_cpu_micros": 30275, "output_level": 6, "num_output_files": 1, "total_output_size": 9277398, "num_input_records": 8158, "num_output_records": 7643, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105572246, "job": 40, "event": "table_file_deletion", "file_number": 68}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000066.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091105573454, "job": 40, "event": "table_file_deletion", "file_number": 66}
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.503209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573509) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573515) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573517) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:11:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:11:45.573521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
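The mon's RocksDB store emits structured EVENT_LOG_v1 entries for the flush and manual compaction above (flush_started, table_file_creation, compaction_finished), each carrying a JSON payload on the same line. A sketch that extracts and summarizes those payloads from journal lines; the sample below is an abridged copy of the compaction_finished entry above:

```python
import json
import re

EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

def rocksdb_events(lines):
    """Yield the JSON payloads of rocksdb EVENT_LOG_v1 entries."""
    for line in lines:
        m = EVENT_RE.search(line)
        if m:
            yield json.loads(m.group(1))

# Payload abridged from the compaction_finished entry above.
sample = ('rocksdb: (Original Log Time 2026/01/22-14:11:45.571677) EVENT_LOG_v1 '
          '{"time_micros": 1769091105571668, "job": 40, "event": "compaction_finished", '
          '"compaction_time_micros": 66348, "total_output_size": 9277398}')

for ev in rocksdb_events([sample]):
    if ev["event"] == "compaction_finished":
        print(f'job {ev["job"]}: {ev["total_output_size"]} bytes '
              f'in {ev["compaction_time_micros"] / 1e6:.3f} s')
```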
Jan 22 09:11:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:45.615+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:46.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:46 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:46.595+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.182 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:11:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:11:47 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:47.559+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:48 np0005592159 nova_compute[226433]: 2026-01-22 14:11:48.439 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:48.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:48 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:48.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:11:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:48.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:11:49 np0005592159 nova_compute[226433]: 2026-01-22 14:11:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:11:49 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:49.616+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:49 np0005592159 nova_compute[226433]: 2026-01-22 14:11:49.987 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:50.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:50 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:50.617+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:51 np0005592159 podman[238541]: 2026-01-22 14:11:51.009445204 +0000 UTC m=+0.066658495 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:11:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:51 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:51 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:51.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:52.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:52 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:52.640+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:52.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:53 np0005592159 nova_compute[226433]: 2026-01-22 14:11:53.442 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:53 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:53.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:54.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:54 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:54.651+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:54.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:54 np0005592159 nova_compute[226433]: 2026-01-22 14:11:54.990 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:55 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:55 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:11:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:55.699+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:11:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:56.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:56 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:56.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:57 np0005592159 ovn_controller[133156]: 2026-01-22T14:11:57Z|00042|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 22 09:11:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:57.612+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:57 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:58 np0005592159 nova_compute[226433]: 2026-01-22 14:11:58.444 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:11:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:11:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:11:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:11:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:58.655+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:11:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:11:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:11:58.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:11:58 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.508 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.509 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.540 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.631 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.631 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.638 226437 DEBUG nova.virt.hardware [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.638 226437 INFO nova.compute.claims [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:11:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:11:59.647+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:11:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:11:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:11:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:11:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.948 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:11:59 np0005592159 nova_compute[226433]: 2026-01-22 14:11:59.994 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:00 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1699627580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.371 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.423s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.377 226437 DEBUG nova.compute.provider_tree [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.436 226437 DEBUG nova.scheduler.client.report [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.465 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.466 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:12:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.558 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.558 226437 DEBUG nova.network.neutron [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.590 226437 INFO nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.643 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:12:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:00.686+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:00.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.850 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.851 226437 DEBUG nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.852 226437 INFO nova.virt.libvirt.driver [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Creating image(s)#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.885 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.922 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.953 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:00 np0005592159 nova_compute[226433]: 2026-01-22 14:12:00.958 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:00 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:00 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.011 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.012 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.013 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.013 226437 DEBUG oslo_concurrency.lockutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.043 226437 DEBUG nova.storage.rbd_utils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] rbd image 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.047 226437 DEBUG oslo_concurrency.processutils [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 87e798e6-6f00-4fe1-8412-75ddc9e2878e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.614 226437 DEBUG nova.network.neutron [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 22 09:12:01 np0005592159 nova_compute[226433]: 2026-01-22 14:12:01.614 226437 DEBUG nova.compute.manager [None req-74c38418-3849-43e5-816f-779a9c09559a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:12:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:01.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:02 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:02.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:02.725+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:02.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:03 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:03 np0005592159 nova_compute[226433]: 2026-01-22 14:12:03.487 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:03.773+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:04.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:04.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:04.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:04 np0005592159 nova_compute[226433]: 2026-01-22 14:12:04.998 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:05 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:05.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:06.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:06 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:06 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:06.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:06.814+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:07 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:12:07 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:12:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:07.807+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:08.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:08 np0005592159 nova_compute[226433]: 2026-01-22 14:12:08.536 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:08 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:08.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:09 np0005592159 podman[238867]: 2026-01-22 14:12:09.034566262 +0000 UTC m=+0.094370958 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:12:09 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:09.751+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592159 nova_compute[226433]: 2026-01-22 14:12:09.999 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:10 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 2118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:10.744+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:11 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:11.785+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:12.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:12 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:12.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:13 np0005592159 nova_compute[226433]: 2026-01-22 14:12:13.537 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:13 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:13.770+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:14 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:14.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592159 nova_compute[226433]: 2026-01-22 14:12:15.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:15 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:15 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:15.850+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:16.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:16.834+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:17 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:17.799+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:18.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:18 np0005592159 nova_compute[226433]: 2026-01-22 14:12:18.540 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:18.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:18 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:18.832+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:19.800+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:20 np0005592159 nova_compute[226433]: 2026-01-22 14:12:20.005 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:20.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:20.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:20.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:21 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:21 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:21.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:22 np0005592159 podman[238952]: 2026-01-22 14:12:22.045178593 +0000 UTC m=+0.096979347 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:12:22 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:22.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:22.736+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:23 np0005592159 nova_compute[226433]: 2026-01-22 14:12:23.543 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:23.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:23.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:23 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:24.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:24.662+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:24 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592159 nova_compute[226433]: 2026-01-22 14:12:25.008 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:25.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:25.632+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:25 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:26.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:26.634+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:27 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:27.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:27.650+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:28 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:28 np0005592159 nova_compute[226433]: 2026-01-22 14:12:28.546 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:28.652+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:29.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:29 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:29.646+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:30 np0005592159 nova_compute[226433]: 2026-01-22 14:12:30.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:30.628+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:31 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:31 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:31.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:31.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:32 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 11 ])
Jan 22 09:12:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:32.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:32.564+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:33.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:33 np0005592159 nova_compute[226433]: 2026-01-22 14:12:33.582 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:33.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:34 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:34 np0005592159 nova_compute[226433]: 2026-01-22 14:12:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:34.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:34.639+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592159 nova_compute[226433]: 2026-01-22 14:12:35.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:35 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:35.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:35.592+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:36 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:36 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 2143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:36.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:36.600+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:37 np0005592159 nova_compute[226433]: 2026-01-22 14:12:37.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:37 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:37.555+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:37.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:38 np0005592159 nova_compute[226433]: 2026-01-22 14:12:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:38.518+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:38.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:38 np0005592159 nova_compute[226433]: 2026-01-22 14:12:38.584 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:38 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:38 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:12:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:39.547+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:39.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:39 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.907 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.908 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.935 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:12:39 np0005592159 nova_compute[226433]: 2026-01-22 14:12:39.936 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.016 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:12:40 np0005592159 podman[239030]: 2026-01-22 14:12:40.049033687 +0000 UTC m=+0.108598124 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 22 09:12:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2193478577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.361 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:40.522+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:40.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.544 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.545 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4771MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.546 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.546 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.640 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.641 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:12:40 np0005592159 nova_compute[226433]: 2026-01-22 14:12:40.641 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:12:41 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:41 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.117 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:12:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:41.490+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:12:41 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3654665343' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.522 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.527 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.542 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.561 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:12:41 np0005592159 nova_compute[226433]: 2026-01-22 14:12:41.562 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.016s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:12:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:41.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:42 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:42 np0005592159 nova_compute[226433]: 2026-01-22 14:12:42.169 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:42 np0005592159 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:42 np0005592159 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:12:42 np0005592159 nova_compute[226433]: 2026-01-22 14:12:42.170 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:12:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:42.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:42.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:43 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:43.463+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:43 np0005592159 nova_compute[226433]: 2026-01-22 14:12:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:12:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:43 np0005592159 nova_compute[226433]: 2026-01-22 14:12:43.586 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:44 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:44.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:45 np0005592159 nova_compute[226433]: 2026-01-22 14:12:45.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:45 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:45.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:45.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:46 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:46 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:46.486+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:46.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:47 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.183 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.184 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:12:47.184 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:12:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:47.527+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:47.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:48 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:48.482+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:48.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:48 np0005592159 nova_compute[226433]: 2026-01-22 14:12:48.589 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:49 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:49.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:49.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:50 np0005592159 nova_compute[226433]: 2026-01-22 14:12:50.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:50 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:50.429+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:50.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:51 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:51 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2157 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:51.423+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:51.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:52 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:52.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:52.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:52 np0005592159 podman[239161]: 2026-01-22 14:12:52.993125107 +0000 UTC m=+0.057681827 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 09:12:53 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:53.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:53.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:53 np0005592159 nova_compute[226433]: 2026-01-22 14:12:53.590 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:54 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:54.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:12:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:54.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:12:54 np0005592159 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 09:12:55 np0005592159 nova_compute[226433]: 2026-01-22 14:12:55.028 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:55 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:55.403+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:55.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:12:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:56.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:56 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:56 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2162 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:12:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:56.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:57.455+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:57 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:57 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 13 ])
Jan 22 09:12:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:12:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:57.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:12:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:58.450+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:12:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:12:58.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:58 np0005592159 nova_compute[226433]: 2026-01-22 14:12:58.593 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:12:58 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:12:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:12:59.436+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:12:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:12:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:12:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:12:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:12:59.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:12:59 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592159 nova_compute[226433]: 2026-01-22 14:13:00.035 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:00.427+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:00.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:00 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:00 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 2167 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:01.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:01.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:02 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:02.448+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:02.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:03 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:03.439+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:03.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:03 np0005592159 nova_compute[226433]: 2026-01-22 14:13:03.594 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:04 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:04.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:04.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:05 np0005592159 nova_compute[226433]: 2026-01-22 14:13:05.038 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:05.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:05 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:05.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:06.421+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:06 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:06 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2172 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:06.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:07.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:07.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:07 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:08.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:08.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:08 np0005592159 nova_compute[226433]: 2026-01-22 14:13:08.598 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:08 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:08 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:09.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:09.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:10 np0005592159 nova_compute[226433]: 2026-01-22 14:13:10.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:10 np0005592159 podman[239368]: 2026-01-22 14:13:10.215126226 +0000 UTC m=+0.095242441 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 09:13:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:10.403+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:10.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:11.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:11.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2177 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:13:11 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:12.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:13 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:13.387+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:13 np0005592159 nova_compute[226433]: 2026-01-22 14:13:13.600 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:14 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:14.351+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:15 np0005592159 nova_compute[226433]: 2026-01-22 14:13:15.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:15.304+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #70. Immutable memtables: 0.
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.775698) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 70
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195775774, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1416, "num_deletes": 256, "total_data_size": 2586040, "memory_usage": 2614016, "flush_reason": "Manual Compaction"}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #71: started
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195802539, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 71, "file_size": 1687738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37340, "largest_seqno": 38751, "table_properties": {"data_size": 1682028, "index_size": 2850, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14642, "raw_average_key_size": 20, "raw_value_size": 1669526, "raw_average_value_size": 2351, "num_data_blocks": 124, "num_entries": 710, "num_filter_entries": 710, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091106, "oldest_key_time": 1769091106, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 26889 microseconds, and 11408 cpu microseconds.
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.802593) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #71: 1687738 bytes OK
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.802615) [db/memtable_list.cc:519] [default] Level-0 commit table #71 started
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850271) [db/memtable_list.cc:722] [default] Level-0 commit table #71: memtable #1 done
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850339) EVENT_LOG_v1 {"time_micros": 1769091195850304, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.850361) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2579160, prev total WAL file size 2579160, number of live WAL files 2.
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000067.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.851221) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323537' seq:72057594037927935, type:22 .. '6C6F676D0031353039' seq:0, type:0; will stop at (end)
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [71(1648KB)], [69(9059KB)]
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195851360, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [71], "files_L6": [69], "score": -1, "input_data_size": 10965136, "oldest_snapshot_seqno": -1}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #72: 7828 keys, 10801446 bytes, temperature: kUnknown
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195995076, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 72, "file_size": 10801446, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10753256, "index_size": 27534, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19589, "raw_key_size": 208536, "raw_average_key_size": 26, "raw_value_size": 10614504, "raw_average_value_size": 1355, "num_data_blocks": 1068, "num_entries": 7828, "num_filter_entries": 7828, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 72, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.995522) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10801446 bytes
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.998210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 76.2 rd, 75.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(12.9) write-amplify(6.4) OK, records in: 8353, records dropped: 525 output_compression: NoCompression
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.998279) EVENT_LOG_v1 {"time_micros": 1769091195998257, "job": 42, "event": "compaction_finished", "compaction_time_micros": 143813, "compaction_time_cpu_micros": 47953, "output_level": 6, "num_output_files": 1, "total_output_size": 10801446, "num_input_records": 8353, "num_output_records": 7828, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:15 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091195998946, "job": 42, "event": "table_file_deletion", "file_number": 71}
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000069.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091196001091, "job": 42, "event": "table_file_deletion", "file_number": 69}
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:15.851070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:13:16.001208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:16.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:16 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2182 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:16.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:17.272+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:17 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:17.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:18.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:18 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:13:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:18 np0005592159 nova_compute[226433]: 2026-01-22 14:13:18.601 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:19.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:19 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:19.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:20 np0005592159 nova_compute[226433]: 2026-01-22 14:13:20.047 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:20.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:20 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:20.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:21.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:21 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:21 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2187 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:21.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:22.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:22 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:23.362+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:23 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:23 np0005592159 nova_compute[226433]: 2026-01-22 14:13:23.603 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:23.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:24 np0005592159 podman[239454]: 2026-01-22 14:13:24.012667448 +0000 UTC m=+0.063764488 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:13:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:24.408+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:25 np0005592159 nova_compute[226433]: 2026-01-22 14:13:25.050 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:25.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:25 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:25 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2192 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:25.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:26.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:27.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:27.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:27 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:28.362+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:28.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:28 np0005592159 nova_compute[226433]: 2026-01-22 14:13:28.604 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:28 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:29.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:29.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:29 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:30 np0005592159 nova_compute[226433]: 2026-01-22 14:13:30.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:30.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:13:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:30.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:13:30 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:31.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:31.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:31 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:31 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:32.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:32.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:32 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:33.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:33 np0005592159 nova_compute[226433]: 2026-01-22 14:13:33.607 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:33.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:33 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:34.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:34 np0005592159 nova_compute[226433]: 2026-01-22 14:13:34.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:34.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:34 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592159 nova_compute[226433]: 2026-01-22 14:13:35.055 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:35.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:13:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:35.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:13:35 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:35 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:36.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:36.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:36 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:37.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:37.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:37 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:38.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:38 np0005592159 nova_compute[226433]: 2026-01-22 14:13:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:38 np0005592159 nova_compute[226433]: 2026-01-22 14:13:38.609 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:38.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:38 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:39.374+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.579 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.579 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.580 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.580 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.581 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:13:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:39.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:39 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:13:39 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/914354036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:13:39 np0005592159 nova_compute[226433]: 2026-01-22 14:13:39.984 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.403s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.058 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.160 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4781MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.161 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:40.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.439 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.440 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.514 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:13:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:40.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:13:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3392952699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.923 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.929 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:13:40 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:40 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.963 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.966 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:13:40 np0005592159 nova_compute[226433]: 2026-01-22 14:13:40.966 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:41 np0005592159 podman[239578]: 2026-01-22 14:13:41.025665757 +0000 UTC m=+0.087322391 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:13:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:41.350+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:41.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:41 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:41 np0005592159 nova_compute[226433]: 2026-01-22 14:13:41.968 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:41 np0005592159 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:41 np0005592159 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:13:41 np0005592159 nova_compute[226433]: 2026-01-22 14:13:41.969 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.017 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.018 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.018 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:42.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:42 np0005592159 nova_compute[226433]: 2026-01-22 14:13:42.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:13:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:42.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:42 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:43.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:43.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:43 np0005592159 nova_compute[226433]: 2026-01-22 14:13:43.646 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:44 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:44.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:44 np0005592159 nova_compute[226433]: 2026-01-22 14:13:44.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:44.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:45 np0005592159 nova_compute[226433]: 2026-01-22 14:13:45.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:45 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:45.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:45.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:46 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:46 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:46.384+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:46.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:13:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:13:47 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:47.370+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:47.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:48 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:48.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:48 np0005592159 nova_compute[226433]: 2026-01-22 14:13:48.647 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:49 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:49.345+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:49.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:50 np0005592159 nova_compute[226433]: 2026-01-22 14:13:50.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:50.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:50 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:51.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:51 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:51 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:51.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:52.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:52 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:52.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:53.378+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:53 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:53.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:53 np0005592159 nova_compute[226433]: 2026-01-22 14:13:53.650 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:54.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:54 np0005592159 nova_compute[226433]: 2026-01-22 14:13:54.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:13:54 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:54.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:54 np0005592159 podman[239660]: 2026-01-22 14:13:54.979009418 +0000 UTC m=+0.044088604 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:13:55 np0005592159 nova_compute[226433]: 2026-01-22 14:13:55.066 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:55.284+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:55.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:55 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:55 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:13:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:13:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:56.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:56 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:57.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:57 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:58.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592159 nova_compute[226433]: 2026-01-22 14:13:58.651 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:13:58 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:13:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:13:58.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:13:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:13:59.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:13:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:13:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:13:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:13:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:13:59.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:13:59 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592159 nova_compute[226433]: 2026-01-22 14:14:00.069 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:00.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:00 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:01.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:01.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:01 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:02.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:02 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:03.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:03 np0005592159 nova_compute[226433]: 2026-01-22 14:14:03.654 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:03.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:03 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:04.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:05 np0005592159 nova_compute[226433]: 2026-01-22 14:14:05.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:05 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:05.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:05.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:06 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:06 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:06.295+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:07 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:07.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:07.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:08 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:08.376+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:08 np0005592159 nova_compute[226433]: 2026-01-22 14:14:08.656 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:08.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:09 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:09.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:10 np0005592159 nova_compute[226433]: 2026-01-22 14:14:10.074 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:10 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:10.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:10.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:11 np0005592159 podman[239711]: 2026-01-22 14:14:11.253243481 +0000 UTC m=+0.112547467 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:14:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:11.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:11.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:11 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:11 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:12.479+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:12.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:12 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:12 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:13.502+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:13 np0005592159 nova_compute[226433]: 2026-01-22 14:14:13.659 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:13.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:13 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:14.462+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:14.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:14 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592159 nova_compute[226433]: 2026-01-22 14:14:15.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:15.442+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:15.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:15 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:15 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:16.420+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:16.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:17 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:17.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:17.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:18 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:18.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:18 np0005592159 nova_compute[226433]: 2026-01-22 14:14:18.661 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:19 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:19.384+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:19.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:20 np0005592159 nova_compute[226433]: 2026-01-22 14:14:20.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:14:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:20.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:21 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:21 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:21.398+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:21.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:22 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:22.441+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:22.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:23 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:23.408+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:23.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:23 np0005592159 nova_compute[226433]: 2026-01-22 14:14:23.715 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:24 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:24.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:24.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:25 np0005592159 nova_compute[226433]: 2026-01-22 14:14:25.080 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:25 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:25.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:25.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:25 np0005592159 podman[239901]: 2026-01-22 14:14:25.995278946 +0000 UTC m=+0.057464005 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:14:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:26 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2252 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:14:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:26.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:26.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:27 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:27.376+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:27.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:28.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:28 np0005592159 nova_compute[226433]: 2026-01-22 14:14:28.730 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:29 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:29.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:29.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:30 np0005592159 nova_compute[226433]: 2026-01-22 14:14:30.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:30 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:30.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:30.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:31 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:31 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:31.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:14:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:31.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:14:32 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:32.341+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #73. Immutable memtables: 0.
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.288579) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 73
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273288606, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 251, "total_data_size": 2308696, "memory_usage": 2338600, "flush_reason": "Manual Compaction"}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #74: started
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273298785, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 74, "file_size": 1505369, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38756, "largest_seqno": 40060, "table_properties": {"data_size": 1500082, "index_size": 2555, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13785, "raw_average_key_size": 20, "raw_value_size": 1488459, "raw_average_value_size": 2251, "num_data_blocks": 110, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091196, "oldest_key_time": 1769091196, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 10291 microseconds, and 4055 cpu microseconds.
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.298861) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #74: 1505369 bytes OK
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.298893) [db/memtable_list.cc:519] [default] Level-0 commit table #74 started
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301455) [db/memtable_list.cc:722] [default] Level-0 commit table #74: memtable #1 done
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301481) EVENT_LOG_v1 {"time_micros": 1769091273301473, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.301506) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2302318, prev total WAL file size 2302318, number of live WAL files 2.
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000070.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302714) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [74(1470KB)], [72(10MB)]
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273302761, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [74], "files_L6": [72], "score": -1, "input_data_size": 12306815, "oldest_snapshot_seqno": -1}
Jan 22 09:14:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:33.320+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #75: 7972 keys, 10595837 bytes, temperature: kUnknown
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273376102, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 75, "file_size": 10595837, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10546997, "index_size": 27800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19973, "raw_key_size": 212751, "raw_average_key_size": 26, "raw_value_size": 10405774, "raw_average_value_size": 1305, "num_data_blocks": 1075, "num_entries": 7972, "num_filter_entries": 7972, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091273, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 75, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.376683) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 10595837 bytes
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.379278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.4 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 10.3 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 8489, records dropped: 517 output_compression: NoCompression
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.379332) EVENT_LOG_v1 {"time_micros": 1769091273379295, "job": 44, "event": "compaction_finished", "compaction_time_micros": 73522, "compaction_time_cpu_micros": 30833, "output_level": 6, "num_output_files": 1, "total_output_size": 10595837, "num_input_records": 8489, "num_output_records": 7972, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273379958, "job": 44, "event": "table_file_deletion", "file_number": 74}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000072.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091273383876, "job": 44, "event": "table_file_deletion", "file_number": 72}
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.302589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:14:33.384059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:14:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:33.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:33 np0005592159 nova_compute[226433]: 2026-01-22 14:14:33.734 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:34.282+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:34 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:34.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:35 np0005592159 nova_compute[226433]: 2026-01-22 14:14:35.086 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:14:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:35.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:35 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:35 np0005592159 nova_compute[226433]: 2026-01-22 14:14:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:14:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:35.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:36.292+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:36 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:36 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2262 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:37.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:37 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:37.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:38.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:38 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:38 np0005592159 nova_compute[226433]: 2026-01-22 14:14:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:38 np0005592159 nova_compute[226433]: 2026-01-22 14:14:38.734 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:38.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:39.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:39 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:40 np0005592159 nova_compute[226433]: 2026-01-22 14:14:40.089 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:40.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:40 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:40 np0005592159 nova_compute[226433]: 2026-01-22 14:14:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:40.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:41.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:41 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:41 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.644 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.645 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.645 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:41.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.776 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.777 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.778 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:14:41 np0005592159 nova_compute[226433]: 2026-01-22 14:14:41.779 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:14:42 np0005592159 podman[240041]: 2026-01-22 14:14:42.073224279 +0000 UTC m=+0.131335255 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:14:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:14:42 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1519008089' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.246 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:14:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:42.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.476 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.477 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4757MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.477 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.478 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:42 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.574 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.575 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:14:42 np0005592159 nova_compute[226433]: 2026-01-22 14:14:42.656 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:14:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:42.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:14:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2614008316' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.113 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.124 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.151 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.152 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.152 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:43.378+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:43 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:43.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:43 np0005592159 nova_compute[226433]: 2026-01-22 14:14:43.774 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:44 np0005592159 nova_compute[226433]: 2026-01-22 14:14:44.147 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:44 np0005592159 nova_compute[226433]: 2026-01-22 14:14:44.148 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:44 np0005592159 nova_compute[226433]: 2026-01-22 14:14:44.148 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:14:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:44.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:44 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:44.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:45 np0005592159 nova_compute[226433]: 2026-01-22 14:14:45.091 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:45.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:45 np0005592159 nova_compute[226433]: 2026-01-22 14:14:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:14:45 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:45.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:46.368+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:46 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:46 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:46.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.185 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:14:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:14:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:47.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:47 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:47.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:48.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:48 np0005592159 nova_compute[226433]: 2026-01-22 14:14:48.775 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:48.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:49.365+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:49 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:49.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:50 np0005592159 nova_compute[226433]: 2026-01-22 14:14:50.094 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:50.367+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:50 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2277 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:50.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:51.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:51.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:51 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:52.400+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:52.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:53.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:53.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:53 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:53 np0005592159 nova_compute[226433]: 2026-01-22 14:14:53.806 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:54.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:14:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:14:55 np0005592159 nova_compute[226433]: 2026-01-22 14:14:55.096 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:55.433+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:55 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:55.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:14:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:56.447+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:56 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:56 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:14:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:14:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:56.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:14:56 np0005592159 podman[240159]: 2026-01-22 14:14:56.985633443 +0000 UTC m=+0.050749444 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:14:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:57.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:57 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:57.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:58.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:58 np0005592159 nova_compute[226433]: 2026-01-22 14:14:58.810 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:14:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:14:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:14:59.438+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:14:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:14:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:14:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:14:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:14:59.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:14:59 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592159 nova_compute[226433]: 2026-01-22 14:15:00.098 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:00.398+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:00 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:00.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:01.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:01.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:01 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:02.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:02.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:03.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:03.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:03 np0005592159 nova_compute[226433]: 2026-01-22 14:15:03.812 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:03 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:04.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:04.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:04 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:05 np0005592159 nova_compute[226433]: 2026-01-22 14:15:05.101 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:05.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:05.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:06 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:06 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:06.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:06.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:07 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:07.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:07.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:08.319+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:08 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:08 np0005592159 nova_compute[226433]: 2026-01-22 14:15:08.853 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:08.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:09.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:09 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:09.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:10 np0005592159 nova_compute[226433]: 2026-01-22 14:15:10.105 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:10.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:10 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:10.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:11.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:11 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:11 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2297 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:11.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:12.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:12 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:12.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:13 np0005592159 podman[240237]: 2026-01-22 14:15:13.052493455 +0000 UTC m=+0.111986531 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 22 09:15:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:13.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:13 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:13.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:13 np0005592159 nova_compute[226433]: 2026-01-22 14:15:13.855 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:14.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:14 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:14.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:15 np0005592159 nova_compute[226433]: 2026-01-22 14:15:15.109 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:15.223+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:15 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:15.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:16.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:16 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:16 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:16.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:17.232+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:17 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:17.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:18.191+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:18 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:18 np0005592159 nova_compute[226433]: 2026-01-22 14:15:18.855 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:18.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:19.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:19 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:19 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:19.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:20 np0005592159 nova_compute[226433]: 2026-01-22 14:15:20.112 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:20.198+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:20 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:20.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:21.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:21.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:22 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:22.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:22.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:23.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:23 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:23.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:23 np0005592159 nova_compute[226433]: 2026-01-22 14:15:23.858 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:24.263+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:24 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:24.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:25 np0005592159 nova_compute[226433]: 2026-01-22 14:15:25.115 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:25.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:25.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:25 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:26.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:26.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:27.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:27.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:28 np0005592159 podman[240403]: 2026-01-22 14:15:28.002566158 +0000 UTC m=+0.063296923 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:15:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:28.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:28 np0005592159 nova_compute[226433]: 2026-01-22 14:15:28.864 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:28.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:15:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7438 writes, 40K keys, 7438 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s#012Cumulative WAL: 7438 writes, 7438 syncs, 1.00 writes per sync, written: 0.07 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1865 writes, 9617 keys, 1865 commit groups, 1.0 writes per commit group, ingest: 16.51 MB, 0.03 MB/s#012Interval WAL: 1865 writes, 1865 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.6      0.50              0.14        22    0.023       0      0       0.0       0.0#012  L6      1/0   10.10 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    130.7    109.8      1.66              0.53        21    0.079    135K    12K       0.0       0.0#012 Sum      1/0   10.10 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    100.2    104.9      2.16              0.67        43    0.050    135K    12K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    105.1    107.7      0.62              0.22        12    0.052     48K   4092       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    130.7    109.8      1.66              0.53        21    0.079    135K    12K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     89.2      0.50              0.14        21    0.024       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.044, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.22 GB write, 0.09 MB/s write, 0.21 GB read, 0.09 MB/s read, 2.2 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 23.59 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000359 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1254,22.65 MB,7.44971%) FilterBlock(43,388.92 KB,0.124936%) IndexBlock(43,580.58 KB,0.186504%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:15:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:29.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:29.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:29 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:30 np0005592159 nova_compute[226433]: 2026-01-22 14:15:30.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:30.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:30.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:31.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:31 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:31.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:32.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:32 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:32.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:32 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:32 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
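The monitor's SLOW_OPS health updates in this section advance roughly in step with the wall clock (2317 sec at 09:15:32, 2322 at 09:15:37, 2327 at 09:15:42, 2337 at 09:15:46, 2342 at 09:15:52) while the op count stays at 19, which indicates the oldest op on osd.2 is stuck rather than slowly draining. A quick stdlib-only check of that from the log text itself, with the first three values copied from this section:

```python
import re

HEALTH_RE = re.compile(
    r"Health check update: (?P<count>\d+) slow ops, "
    r"oldest one blocked for (?P<blocked>\d+) sec, "
    r"(?P<osd>osd\.\d+) has slow ops \(SLOW_OPS\)"
)

lines = [
    "Health check update: 19 slow ops, oldest one blocked for 2317 sec, osd.2 has slow ops (SLOW_OPS)",
    "Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)",
    "Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)",
]

blocked = [int(HEALTH_RE.search(l).group("blocked")) for l in lines]
deltas = [b - a for a, b in zip(blocked, blocked[1:])]
print(blocked, deltas)   # [2317, 2322, 2327] [5, 5]
# A count that stays flat while "blocked for" keeps growing means the queue
# is not draining, not merely slow.
```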
Jan 22 09:15:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:33.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:33.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:33 np0005592159 nova_compute[226433]: 2026-01-22 14:15:33.866 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:34 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:34.296+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:34.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:35 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:35 np0005592159 nova_compute[226433]: 2026-01-22 14:15:35.120 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:35.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:35 np0005592159 nova_compute[226433]: 2026-01-22 14:15:35.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:35.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:36 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:36.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:36.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:37.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:37 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:37 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
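The radosgw "beast" lines are anonymous HEAD / probes arriving alternately from 192.168.122.100 and 192.168.122.102 about once per second each, which looks like load-balancer health checking rather than user traffic. A stdlib-only sketch for splitting these access lines into fields; the group names are labels for this sketch, not radosgw options.

```python
import re
from datetime import datetime

BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Split a radosgw beast access line into client, user, timestamp, request, status, latency."""
    m = BEAST_RE.search(line)
    if not m:
        return None
    d = m.groupdict()
    d["ts"] = datetime.strptime(d["ts"], "%d/%b/%Y:%H:%M:%S.%f %z")
    d["status"] = int(d["status"])
    d["latency"] = float(d["latency"])
    return d

sample = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
          '[22/Jan/2026:14:15:37.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
          'latency=0.001000027s')
rec = parse_beast(sample)
print(rec["client"], rec["request"], rec["status"], rec["latency"])
# 192.168.122.102 HEAD / HTTP/1.0 200 0.001000027
```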
Jan 22 09:15:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:38.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:38 np0005592159 nova_compute[226433]: 2026-01-22 14:15:38.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:38 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:38 np0005592159 nova_compute[226433]: 2026-01-22 14:15:38.907 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:38.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:39.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:39.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:40 np0005592159 nova_compute[226433]: 2026-01-22 14:15:40.121 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:40.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:40 np0005592159 nova_compute[226433]: 2026-01-22 14:15:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:40 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:40.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:41.230+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:41.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:42 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:42 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:15:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:42.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:42 np0005592159 nova_compute[226433]: 2026-01-22 14:15:42.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:42.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:43.320+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.540 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.541 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.542 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.542 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.573 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
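The Acquiring / acquired (waited) / released (held) triple around "compute_resources" above is the oslo.concurrency debug trace around a synchronized section. A minimal sketch of that pattern using the lockutils.synchronized decorator from oslo.concurrency; this only illustrates the lock shape, and whether nova adds further wrappers is not shown by this log.

```python
# Minimal sketch of the lock pattern behind the "Acquiring lock ... /
# Lock ... acquired ... waited / Lock ... released ... held" triples above,
# using oslo.concurrency's synchronized decorator (assumes the oslo.concurrency
# package is installed; debug logging must be enabled to see those lines).
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def clean_compute_node_cache_sketch():
    # Runs with the in-process "compute_resources" lock held; concurrent
    # callers serialize here, and the waited/held durations end up in the log.
    pass

clean_compute_node_cache_sketch()
```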
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.574 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.575 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:15:43 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:43.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:43 np0005592159 nova_compute[226433]: 2026-01-22 14:15:43.908 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:15:44 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3600868494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.020 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
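The resource tracker sizes its RBD-backed storage by shelling out to exactly the command logged above (ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf), which the monitor audits as a {"prefix": "df", "format": "json"} dispatch. A standalone sketch of the same probe; the JSON keys read at the end ("stats", "total_bytes", "total_avail_bytes") are assumptions based on current Ceph output and should be checked against your own `ceph df -f json`.

```python
import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client="openstack"):
    """Run the same cluster-capacity probe nova logs above and return the parsed JSON."""
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    stats = ceph_df().get("stats", {})
    # Key names assumed from current ceph releases; verify locally.
    print("total bytes:", stats.get("total_bytes"),
          "avail bytes:", stats.get("total_avail_bytes"))
```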
Jan 22 09:15:44 np0005592159 podman[240555]: 2026-01-22 14:15:44.060392457 +0000 UTC m=+0.110366558 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.197 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.198 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4798MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.198 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.199 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:44.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.607 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.608 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:15:44 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:44 np0005592159 nova_compute[226433]: 2026-01-22 14:15:44.797 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:15:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:44.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.123 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:15:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3159148396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.218 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.226 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.357 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.361 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.362 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
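The inventory reported to placement a few lines above (VCPU 8 at ratio 4.0, MEMORY_MB 7679 with 512 reserved, DISK_GB 20 with 1 reserved at ratio 0.9) yields the schedulable capacity as (total - reserved) * allocation_ratio, and the 896 MB used_ram in the final resource view is the 512 MB host reservation plus the three 128 MB instances reported earlier. A short worked check of those numbers:

```python
# Numbers copied from the inventory and resource-view lines above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity = {capacity:g}")
# VCPU: schedulable capacity = 32
# MEMORY_MB: schedulable capacity = 7167
# DISK_GB: schedulable capacity = 17.1

print(512 + 3 * 128)   # 896 MB -> matches used_ram in the final resource view
```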
Jan 22 09:15:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:45.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:15:45 np0005592159 nova_compute[226433]: 2026-01-22 14:15:45.602 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:15:45 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:45.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:46.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:46 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:46 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:46.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.186 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:15:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:15:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:47.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:47 np0005592159 nova_compute[226433]: 2026-01-22 14:15:47.603 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:47.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:47 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:48.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:48 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:48 np0005592159 nova_compute[226433]: 2026-01-22 14:15:48.961 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:48.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:49.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:49 np0005592159 nova_compute[226433]: 2026-01-22 14:15:49.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:49 np0005592159 nova_compute[226433]: 2026-01-22 14:15:49.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:15:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:49.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:50 np0005592159 nova_compute[226433]: 2026-01-22 14:15:50.131 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:50 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:50.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:51.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:51.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:51.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:51 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:52.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:52 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:15:52 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:53.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:53.409+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:53.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:53 np0005592159 nova_compute[226433]: 2026-01-22 14:15:53.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:54 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:54.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:55.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.132 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:55 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:55.358+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:55.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.956 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.976 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 3 instances in the database and 0 instances on the hypervisor.#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid f591d61b-712e-49aa-85bd-8d222b607eb3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 87e798e6-6f00-4fe1-8412-75ddc9e2878e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.977 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:55 np0005592159 nova_compute[226433]: 2026-01-22 14:15:55.978 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:15:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:15:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:56.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:56 np0005592159 nova_compute[226433]: 2026-01-22 14:15:56.532 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:15:56 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:57.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:57.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:15:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:57.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:15:57 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:57 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:58.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:59 np0005592159 nova_compute[226433]: 2026-01-22 14:15:59.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:15:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:15:59.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:15:59 np0005592159 podman[240664]: 2026-01-22 14:15:59.026444663 +0000 UTC m=+0.091757364 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:15:59 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 15 ])
Jan 22 09:15:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:15:59.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:15:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:15:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:15:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:15:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:15:59.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:00 np0005592159 nova_compute[226433]: 2026-01-22 14:16:00.176 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:00.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:00 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:00.411 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:16:00 np0005592159 nova_compute[226433]: 2026-01-22 14:16:00.411 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:00 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:00.412 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:16:00 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:01.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:01.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:01.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:01 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:01 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 2347 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:02.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:03.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:03 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:03.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:03.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:04 np0005592159 nova_compute[226433]: 2026-01-22 14:16:04.007 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:04.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:04 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:05.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:05.172+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:05 np0005592159 nova_compute[226433]: 2026-01-22 14:16:05.178 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:05.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:05 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:06.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:07.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:07.195+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:07 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2352 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:07 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:07.413 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:16:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:08.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:08 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:09.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:09 np0005592159 nova_compute[226433]: 2026-01-22 14:16:09.050 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:09.151+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:09.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:10.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:10 np0005592159 nova_compute[226433]: 2026-01-22 14:16:10.217 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:10 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:11.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:11.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #76. Immutable memtables: 0.
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.793839) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 76
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371793875, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1487, "num_deletes": 251, "total_data_size": 2780669, "memory_usage": 2818296, "flush_reason": "Manual Compaction"}
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #77: started
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371823670, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 77, "file_size": 1150158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40065, "largest_seqno": 41547, "table_properties": {"data_size": 1145332, "index_size": 2030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14943, "raw_average_key_size": 21, "raw_value_size": 1133865, "raw_average_value_size": 1652, "num_data_blocks": 88, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091274, "oldest_key_time": 1769091274, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 29903 microseconds, and 3542 cpu microseconds.
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.823734) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #77: 1150158 bytes OK
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.823760) [db/memtable_list.cc:519] [default] Level-0 commit table #77 started
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826098) [db/memtable_list.cc:722] [default] Level-0 commit table #77: memtable #1 done
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826123) EVENT_LOG_v1 {"time_micros": 1769091371826115, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.826147) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2773549, prev total WAL file size 2773549, number of live WAL files 2.
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000073.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.827542) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323535' seq:0, type:0; will stop at (end)
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [77(1123KB)], [75(10MB)]
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371827577, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [77], "files_L6": [75], "score": -1, "input_data_size": 11745995, "oldest_snapshot_seqno": -1}
Jan 22 09:16:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:11.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #78: 8186 keys, 8509468 bytes, temperature: kUnknown
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091371904584, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 78, "file_size": 8509468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8463046, "index_size": 24870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20485, "raw_key_size": 218086, "raw_average_key_size": 26, "raw_value_size": 8321860, "raw_average_value_size": 1016, "num_data_blocks": 953, "num_entries": 8186, "num_filter_entries": 8186, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091371, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 78, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:16:11 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.904843) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8509468 bytes
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.000100) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.4 rd, 110.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(17.6) write-amplify(7.4) OK, records in: 8658, records dropped: 472 output_compression: NoCompression
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.000144) EVENT_LOG_v1 {"time_micros": 1769091372000127, "job": 46, "event": "compaction_finished", "compaction_time_micros": 77093, "compaction_time_cpu_micros": 21170, "output_level": 6, "num_output_files": 1, "total_output_size": 8509468, "num_input_records": 8658, "num_output_records": 8186, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091372000720, "job": 46, "event": "table_file_deletion", "file_number": 77}
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000075.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091372003600, "job": 46, "event": "table_file_deletion", "file_number": 75}
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:11.827430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:16:12.003785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:16:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:12.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:13.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:13.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:13 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:13.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:14 np0005592159 nova_compute[226433]: 2026-01-22 14:16:14.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:14.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:14 np0005592159 podman[240740]: 2026-01-22 14:16:14.568077719 +0000 UTC m=+0.096385058 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:16:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:15.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:15 np0005592159 nova_compute[226433]: 2026-01-22 14:16:15.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:15.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:15.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:16 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:16.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:17.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:17.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:17 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:17 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2362 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:18.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3577899950' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:18 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:19 np0005592159 nova_compute[226433]: 2026-01-22 14:16:19.055 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:19.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:19.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:19.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:20 np0005592159 nova_compute[226433]: 2026-01-22 14:16:20.222 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:20.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:21.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:21.282+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:21 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2367 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:21.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:22.298+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:22 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:23.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:23.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:24 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:24 np0005592159 nova_compute[226433]: 2026-01-22 14:16:24.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:24.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:25.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:25 np0005592159 nova_compute[226433]: 2026-01-22 14:16:25.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:25.298+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:25 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:25.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:26.268+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:27.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:27 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:27.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:27.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:28.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:28 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:28 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2372 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:28 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:29 np0005592159 nova_compute[226433]: 2026-01-22 14:16:29.060 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:29.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:29.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:29.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:29 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 14 ])
Jan 22 09:16:29 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:29 np0005592159 podman[240775]: 2026-01-22 14:16:29.990819136 +0000 UTC m=+0.056871366 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:16:30 np0005592159 nova_compute[226433]: 2026-01-22 14:16:30.225 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:30.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:31 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:31.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:16:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.5 total, 600.0 interval#012Cumulative writes: 6977 writes, 27K keys, 6977 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6977 writes, 1551 syncs, 4.50 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1066 writes, 3434 keys, 1066 commit groups, 1.0 writes per commit group, ingest: 3.16 MB, 0.01 MB/s#012Interval WAL: 1066 writes, 439 syncs, 2.43 writes per sync, written: 0.00 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:16:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:31.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:31.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:32 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:32 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 2377 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:32.276+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:33.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:33.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:33 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:34 np0005592159 nova_compute[226433]: 2026-01-22 14:16:34.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:34.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:34 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:35.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:35 np0005592159 nova_compute[226433]: 2026-01-22 14:16:35.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:35.303+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:35 np0005592159 nova_compute[226433]: 2026-01-22 14:16:35.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:36 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:36.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:37.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:37.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:37 np0005592159 nova_compute[226433]: 2026-01-22 14:16:37.288 226437 DEBUG oslo_concurrency.lockutils [None req-dec0213c-d0ec-412c-9228-b640587c2a19 6e90a287840e40e0a5581b46982835a9 7a07cccb6a794189bf178665decf13c8 - - default default] Acquiring lock "f591d61b-712e-49aa-85bd-8d222b607eb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:37 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:37 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2387 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:38.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:38 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:38 np0005592159 ovn_controller[133156]: 2026-01-22T14:16:38Z|00043|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:16:39 np0005592159 nova_compute[226433]: 2026-01-22 14:16:39.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:39.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:39.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:39.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:39 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:40 np0005592159 nova_compute[226433]: 2026-01-22 14:16:40.231 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:40.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:40 np0005592159 nova_compute[226433]: 2026-01-22 14:16:40.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:40 np0005592159 nova_compute[226433]: 2026-01-22 14:16:40.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:41.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:41 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:41.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:41.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:42.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:42 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:43.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:43.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:43 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2392 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:43 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:43.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.065 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:44.277+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.536 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:16:44 np0005592159 nova_compute[226433]: 2026-01-22 14:16:44.537 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:16:44 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:16:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:16:44 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:16:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3043854192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:16:45 np0005592159 podman[241004]: 2026-01-22 14:16:45.041159018 +0000 UTC m=+0.094523443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.053 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:16:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.223 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4799MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.225 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:45.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.340 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.341 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.341 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.355 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.369 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.369 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.384 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.406 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.495 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:16:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:45.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:16:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1336531881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.929 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.935 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.953 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.976 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:16:45 np0005592159 nova_compute[226433]: 2026-01-22 14:16:45.976 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:46 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:46.303+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:46 np0005592159 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:46 np0005592159 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:16:46 np0005592159 nova_compute[226433]: 2026-01-22 14:16:46.978 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.009 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.009 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.010 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:47 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:47.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.187 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.188 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:16:47.188 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:16:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:47.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:47 np0005592159 nova_compute[226433]: 2026-01-22 14:16:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:16:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:47.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:48 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:48.266+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:49 np0005592159 nova_compute[226433]: 2026-01-22 14:16:49.068 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:49.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:49 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:49.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:49.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:50.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:50 np0005592159 nova_compute[226433]: 2026-01-22 14:16:50.292 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:50 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:51.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:51 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:16:51 np0005592159 nova_compute[226433]: 2026-01-22 14:16:51.410 226437 DEBUG oslo_concurrency.processutils [None req-aeaaeb78-1155-4f77-81df-46e2a650d614 cfca93e323f848dba5ea3f5880bb9071 12769453a3af4b8eb7d8ff7daaaaa7ad - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:16:51 np0005592159 nova_compute[226433]: 2026-01-22 14:16:51.446 226437 DEBUG oslo_concurrency.processutils [None req-aeaaeb78-1155-4f77-81df-46e2a650d614 cfca93e323f848dba5ea3f5880bb9071 12769453a3af4b8eb7d8ff7daaaaa7ad - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:16:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:51.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:52.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:52 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:52 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:53.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:53.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:53 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:16:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:53.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:16:54 np0005592159 nova_compute[226433]: 2026-01-22 14:16:54.070 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:54.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:54 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:55.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:55.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:55 np0005592159 nova_compute[226433]: 2026-01-22 14:16:55.294 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:55 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:16:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:56.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:56 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:57.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:57.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:57 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:57 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:16:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:16:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:57.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:16:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:58.219+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592159 nova_compute[226433]: 2026-01-22 14:16:59.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:16:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:16:59.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:16:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:16:59.201+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:16:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:16:59 np0005592159 nova_compute[226433]: 2026-01-22 14:16:59.572 226437 DEBUG oslo_concurrency.lockutils [None req-46113aab-392c-4b18-81d5-e2b8818c573a 954d54358fc34858810c0e9b3866c2ad d066548ecdc24f11bb8d3b36c5301f7d - - default default] Acquiring lock "87e798e6-6f00-4fe1-8412-75ddc9e2878e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:16:59 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 22 09:16:59 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 22 09:16:59 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Jan 22 09:16:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:16:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:16:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:16:59.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:00.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:00 np0005592159 nova_compute[226433]: 2026-01-22 14:17:00.296 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:00 np0005592159 podman[241167]: 2026-01-22 14:17:00.984448461 +0000 UTC m=+0.049079205 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:17:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:17:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:01.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:17:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:01.244+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:01.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:02.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:02 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:03.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:03.248+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:03.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:04 np0005592159 nova_compute[226433]: 2026-01-22 14:17:04.073 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:04.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:04 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:05.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:05.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:17:05 np0005592159 nova_compute[226433]: 2026-01-22 14:17:05.296 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:05 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:05 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:17:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:05.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:17:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:06.234+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:06 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:17:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:07.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:07.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:08 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:08.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:09 np0005592159 nova_compute[226433]: 2026-01-22 14:17:09.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:17:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:09.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:17:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:09.174+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.328 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:17:09 np0005592159 nova_compute[226433]: 2026-01-22 14:17:09.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.329 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:17:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:09.330 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:17:09 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:09.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:10.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:10 np0005592159 nova_compute[226433]: 2026-01-22 14:17:10.298 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:10 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:11.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:11.237+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:11 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:17:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:11.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:17:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:12.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:12 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:12 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:13.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:13.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:13 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:14 np0005592159 nova_compute[226433]: 2026-01-22 14:17:14.079 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:14.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:14 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:15.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:15 np0005592159 nova_compute[226433]: 2026-01-22 14:17:15.299 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:16 np0005592159 podman[241243]: 2026-01-22 14:17:16.069261186 +0000 UTC m=+0.128293043 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
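Annotation: the podman health_status event above reports the ovn_controller healthcheck as healthy and, in its config_data field, echoes the container spec as a Python-style dict literal (environment, healthcheck mount, image, bind mounts). A small inspection sketch, assuming the config_data value has already been cut out of the event line; the shortened literal below is copied from that field purely for illustration:

    import ast

    # Shortened copy of the ovn_controller config_data shown above.
    config_data = ast.literal_eval(
        "{'depends_on': ['openvswitch.service'], 'net': 'host', "
        "'privileged': True, 'restart': 'always', 'user': 'root', "
        "'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run']}"
    )

    print(config_data['restart'])          # always
    for vol in config_data['volumes']:     # one bind mount per entry
        print(vol)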
Jan 22 09:17:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:16.166+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:16 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
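Annotation: the monitor's _set_new_cache_sizes line above reports its cache targets in bytes; converted, cache_size is roughly 972.8 MiB, with 332 MiB each for the inc/full allocations and 304 MiB for the key-value allocation (the interpretation of the allocation fields is an assumption; the byte values are copied from the line above):

    MiB = 1024 ** 2
    for name, val in [('cache_size', 1020054731), ('inc_alloc', 348127232),
                      ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
        print(f'{name}: {val / MiB:.1f} MiB')
    # cache_size: 972.8 MiB, inc_alloc/full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB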
Jan 22 09:17:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:17.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:17 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:17 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2427 sec, osd.2 has slow ops (SLOW_OPS)
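Annotation: "blocked for 2427 sec" reported at 14:17:17 UTC places the start of the oldest blocked op at roughly 13:36:50 UTC, well before this excerpt begins; the later health-check updates (2432, 2437, 2442, 2447 sec) keep stepping while the count stays at 20 slow ops, i.e. the blocked ops are not draining. Minimal arithmetic with the values taken from the line above:

    from datetime import datetime, timedelta, timezone

    reported_at = datetime(2026, 1, 22, 14, 17, 17, tzinfo=timezone.utc)
    blocked_for = timedelta(seconds=2427)
    print(reported_at - blocked_for)   # 2026-01-22 13:36:50+00:00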
Jan 22 09:17:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:17.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:18.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:18 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:18 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:19 np0005592159 nova_compute[226433]: 2026-01-22 14:17:19.081 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:19.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:19 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:19.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:20.153+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:20 np0005592159 nova_compute[226433]: 2026-01-22 14:17:20.301 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:21 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:21.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:22.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:22 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:23.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:23.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:23 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:23 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592159 nova_compute[226433]: 2026-01-22 14:17:24.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:24.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:24 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:24 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:25.219+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:25 np0005592159 nova_compute[226433]: 2026-01-22 14:17:25.302 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:26.227+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:27 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:17:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:17:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:27.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:27.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:28 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:28.191+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:29 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:29 np0005592159 nova_compute[226433]: 2026-01-22 14:17:29.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:29.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:29.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:30.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:30 np0005592159 nova_compute[226433]: 2026-01-22 14:17:30.304 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:30 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:31.176+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:31 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:17:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:31.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:17:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:32 np0005592159 podman[241278]: 2026-01-22 14:17:32.010666418 +0000 UTC m=+0.062710201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 09:17:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:32.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:32 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:32 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:33.224+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:33.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:34 np0005592159 nova_compute[226433]: 2026-01-22 14:17:34.087 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:34.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #79. Immutable memtables: 0.
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.310665) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 79
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454310701, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1246, "num_deletes": 251, "total_data_size": 2384903, "memory_usage": 2420368, "flush_reason": "Manual Compaction"}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #80: started
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454439453, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 80, "file_size": 1568060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41552, "largest_seqno": 42793, "table_properties": {"data_size": 1562724, "index_size": 2604, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13558, "raw_average_key_size": 21, "raw_value_size": 1551283, "raw_average_value_size": 2405, "num_data_blocks": 111, "num_entries": 645, "num_filter_entries": 645, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091372, "oldest_key_time": 1769091372, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 128822 microseconds, and 4550 cpu microseconds.
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.439483) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #80: 1568060 bytes OK
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.439503) [db/memtable_list.cc:519] [default] Level-0 commit table #80 started
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444651) [db/memtable_list.cc:722] [default] Level-0 commit table #80: memtable #1 done
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444676) EVENT_LOG_v1 {"time_micros": 1769091454444668, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.444697) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 2378738, prev total WAL file size 2378738, number of live WAL files 2.
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000076.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.446243) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [80(1531KB)], [78(8310KB)]
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454446275, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [80], "files_L6": [78], "score": -1, "input_data_size": 10077528, "oldest_snapshot_seqno": -1}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #81: 8314 keys, 8450274 bytes, temperature: kUnknown
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454539383, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 81, "file_size": 8450274, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8403151, "index_size": 25251, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 222074, "raw_average_key_size": 26, "raw_value_size": 8259695, "raw_average_value_size": 993, "num_data_blocks": 963, "num_entries": 8314, "num_filter_entries": 8314, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091454, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 81, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.539606) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8450274 bytes
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.564833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.2 rd, 90.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.1 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.4) OK, records in: 8831, records dropped: 517 output_compression: NoCompression
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.564882) EVENT_LOG_v1 {"time_micros": 1769091454564864, "job": 48, "event": "compaction_finished", "compaction_time_micros": 93175, "compaction_time_cpu_micros": 20875, "output_level": 6, "num_output_files": 1, "total_output_size": 8450274, "num_input_records": 8831, "num_output_records": 8314, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454565633, "job": 48, "event": "table_file_deletion", "file_number": 80}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000078.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091454568210, "job": 48, "event": "table_file_deletion", "file_number": 78}
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.446163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568350) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:17:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:17:34.568356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
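Annotation: the rocksdb job 48 lines above give enough byte counts to reproduce the printed amplification figures: one 1,568,060-byte L0 input (table #80), input_data_size 10,077,528 bytes in total, and an 8,450,274-byte L6 output (table #81). A short check, assuming write-amplify is output bytes over newly flushed L0 bytes and read-write-amplify is (total input + output) over L0 bytes, which matches the 5.4 and 11.8 reported above:

    l0_input     = 1_568_060    # table #80, the freshly flushed L0 file
    total_input  = 10_077_528   # "input_data_size" for compaction job 48
    total_output = 8_450_274    # table #81 written to L6

    print(f'write-amplify      {total_output / l0_input:.1f}')                   # 5.4
    print(f'read-write-amplify {(total_input + total_output) / l0_input:.1f}')   # 11.8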
Jan 22 09:17:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:35.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592159 nova_compute[226433]: 2026-01-22 14:17:35.307 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:35 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:35.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:36.148+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:36 np0005592159 nova_compute[226433]: 2026-01-22 14:17:36.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:17:36 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:17:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:17:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:37.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:37 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:37 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:37.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:38.232+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592159 nova_compute[226433]: 2026-01-22 14:17:39.125 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:39.216+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:39.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:40.182+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:40 np0005592159 nova_compute[226433]: 2026-01-22 14:17:40.344 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:17:40 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:41.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:41.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:41 np0005592159 nova_compute[226433]: 2026-01-22 14:17:41.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:17:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:41.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:42.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:42 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:42 np0005592159 nova_compute[226433]: 2026-01-22 14:17:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:17:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:43.136+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:43 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:43.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:44 np0005592159 nova_compute[226433]: 2026-01-22 14:17:44.127 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:44.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:44 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:45.112+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:45 np0005592159 nova_compute[226433]: 2026-01-22 14:17:45.346 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:45 np0005592159 nova_compute[226433]: 2026-01-22 14:17:45.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:45 np0005592159 nova_compute[226433]: 2026-01-22 14:17:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:45 np0005592159 nova_compute[226433]: 2026-01-22 14:17:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:17:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:45.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:45 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:46.136+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.589 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.590 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.590 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.591 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.592 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.592 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.655 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.656 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:17:46 np0005592159 nova_compute[226433]: 2026-01-22 14:17:46.656 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:17:47 np0005592159 podman[241367]: 2026-01-22 14:17:47.011484854 +0000 UTC m=+0.076824517 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:17:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:47.177+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.189 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.189 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:17:47.190 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/490131714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:17:47 np0005592159 nova_compute[226433]: 2026-01-22 14:17:47.322 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:17:47 np0005592159 nova_compute[226433]: 2026-01-22 14:17:47.485 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:17:47 np0005592159 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4796MB free_disk=20.896564483642578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:17:47 np0005592159 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:17:47 np0005592159 nova_compute[226433]: 2026-01-22 14:17:47.487 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:17:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:47.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.006 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.006 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.007 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:17:48 np0005592159 nova_compute[226433]: 2026-01-22 14:17:48.093 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:17:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:48.179+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:48 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:17:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1398042159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.013 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.921s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.023 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.043 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.066 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.067 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:17:49 np0005592159 nova_compute[226433]: 2026-01-22 14:17:49.128 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:49.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:17:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:17:49 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:49.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:50.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:50 np0005592159 nova_compute[226433]: 2026-01-22 14:17:50.348 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:51 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:51.256+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:51.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:51 np0005592159 nova_compute[226433]: 2026-01-22 14:17:51.992 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:17:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:52.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:53.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:17:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:53.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:54 np0005592159 nova_compute[226433]: 2026-01-22 14:17:54.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:54 np0005592159 podman[241774]: 2026-01-22 14:17:54.183233478 +0000 UTC m=+0.732156303 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Jan 22 09:17:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:54.190+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:54 np0005592159 podman[241774]: 2026-01-22 14:17:54.612351189 +0000 UTC m=+1.161274014 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:17:55 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:55.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:55 np0005592159 nova_compute[226433]: 2026-01-22 14:17:55.349 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:55.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:56.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:56 np0005592159 podman[241928]: 2026-01-22 14:17:56.233453069 +0000 UTC m=+0.651884145 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:17:56 np0005592159 podman[241950]: 2026-01-22 14:17:56.536529582 +0000 UTC m=+0.284111319 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:17:56 np0005592159 podman[241928]: 2026-01-22 14:17:56.565203095 +0000 UTC m=+0.983634161 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:17:56 np0005592159 podman[241996]: 2026-01-22 14:17:56.821544004 +0000 UTC m=+0.078704108 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793)
Jan 22 09:17:56 np0005592159 podman[242016]: 2026-01-22 14:17:56.889527655 +0000 UTC m=+0.051906914 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 09:17:56 np0005592159 podman[241996]: 2026-01-22 14:17:56.914172981 +0000 UTC m=+0.171333055 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, description=keepalived for Ceph, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived)
Jan 22 09:17:57 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:57.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:17:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:57.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:58.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:58 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:58 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:59 np0005592159 nova_compute[226433]: 2026-01-22 14:17:59.132 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:17:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:17:59.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:17:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:17:59.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:17:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:17:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:17:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:17:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:17:59.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:00.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:00 np0005592159 nova_compute[226433]: 2026-01-22 14:18:00.352 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:01.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:01.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:18:01 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:18:01 np0005592159 nova_compute[226433]: 2026-01-22 14:18:01.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:01.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:02.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:18:02 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:18:02 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 2467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:03 np0005592159 podman[242163]: 2026-01-22 14:18:03.002371192 +0000 UTC m=+0.058832558 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 22 09:18:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:03.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:03.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:03.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:04 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:18:04 np0005592159 nova_compute[226433]: 2026-01-22 14:18:04.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:04.309+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:05.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:05.300+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:05 np0005592159 nova_compute[226433]: 2026-01-22 14:18:05.354 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:05 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:05.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:06.336+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:07.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:07.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:07.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:08.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:08 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:09 np0005592159 nova_compute[226433]: 2026-01-22 14:18:09.135 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:09.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:09.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:09 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:09.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:10.263+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592159 nova_compute[226433]: 2026-01-22 14:18:10.356 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:10 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:10 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:11.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:11.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #82. Immutable memtables: 0.
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.863243) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 82
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491863267, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 728, "num_deletes": 255, "total_data_size": 1208474, "memory_usage": 1230024, "flush_reason": "Manual Compaction"}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #83: started
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491869603, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 83, "file_size": 784551, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42799, "largest_seqno": 43521, "table_properties": {"data_size": 780920, "index_size": 1411, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9036, "raw_average_key_size": 19, "raw_value_size": 773296, "raw_average_value_size": 1703, "num_data_blocks": 61, "num_entries": 454, "num_filter_entries": 454, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091454, "oldest_key_time": 1769091454, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 6448 microseconds, and 2653 cpu microseconds.
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.869675) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #83: 784551 bytes OK
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.869708) [db/memtable_list.cc:519] [default] Level-0 commit table #83 started
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871767) [db/memtable_list.cc:722] [default] Level-0 commit table #83: memtable #1 done
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871783) EVENT_LOG_v1 {"time_micros": 1769091491871778, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.871801) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1204419, prev total WAL file size 1204419, number of live WAL files 2.
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000079.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353038' seq:72057594037927935, type:22 .. '6C6F676D0031373539' seq:0, type:0; will stop at (end)
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [83(766KB)], [81(8252KB)]
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491872539, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [83], "files_L6": [81], "score": -1, "input_data_size": 9234825, "oldest_snapshot_seqno": -1}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #84: 8243 keys, 9067574 bytes, temperature: kUnknown
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491940135, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 84, "file_size": 9067574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9020058, "index_size": 25836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20613, "raw_key_size": 221857, "raw_average_key_size": 26, "raw_value_size": 8876929, "raw_average_value_size": 1076, "num_data_blocks": 985, "num_entries": 8243, "num_filter_entries": 8243, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 84, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.940504) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9067574 bytes
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.942702) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.5 rd, 134.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.1 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.3) write-amplify(11.6) OK, records in: 8768, records dropped: 525 output_compression: NoCompression
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.942717) EVENT_LOG_v1 {"time_micros": 1769091491942710, "job": 50, "event": "compaction_finished", "compaction_time_micros": 67674, "compaction_time_cpu_micros": 28682, "output_level": 6, "num_output_files": 1, "total_output_size": 9067574, "num_input_records": 8768, "num_output_records": 8243, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491942964, "job": 50, "event": "table_file_deletion", "file_number": 83}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000081.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091491944415, "job": 50, "event": "table_file_deletion", "file_number": 81}
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.872288) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:18:11.944523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:18:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:11.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:12.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:18:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:13.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:13.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:13 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:13 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:13.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:14 np0005592159 nova_compute[226433]: 2026-01-22 14:18:14.137 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:14.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:15.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:15.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:15 np0005592159 nova_compute[226433]: 2026-01-22 14:18:15.359 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:15.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:16.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:17.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:17.254+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:17 np0005592159 ovn_controller[133156]: 2026-01-22T14:18:17Z|00044|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 22 09:18:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:17.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:18 np0005592159 podman[242288]: 2026-01-22 14:18:18.017221535 +0000 UTC m=+0.074138941 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:18:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:18:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:18:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:18:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/605101687' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:18:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:18.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:19 np0005592159 nova_compute[226433]: 2026-01-22 14:18:19.139 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:19.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:19.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:19.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:20.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:20 np0005592159 nova_compute[226433]: 2026-01-22 14:18:20.362 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:20 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:21.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:21.264+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:18:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:21.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:18:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:22.239+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:22 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:22 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:23.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:23.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:23.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:24 np0005592159 nova_compute[226433]: 2026-01-22 14:18:24.181 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:24.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:24 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:25.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:25.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:25 np0005592159 nova_compute[226433]: 2026-01-22 14:18:25.366 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:25.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:26 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:26.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:27 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:27.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:27.296+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:27.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:28.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:29 np0005592159 nova_compute[226433]: 2026-01-22 14:18:29.184 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:29.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:29.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:29.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:30 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:30.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:30 np0005592159 nova_compute[226433]: 2026-01-22 14:18:30.368 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:31.274+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:31.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:31.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:32.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:33 np0005592159 podman[242346]: 2026-01-22 14:18:33.199976698 +0000 UTC m=+0.049070218 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true)
Jan 22 09:18:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:33.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:33.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:18:33 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 2502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:34.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:34 np0005592159 nova_compute[226433]: 2026-01-22 14:18:34.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:34.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:34 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:34 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:35.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:35 np0005592159 nova_compute[226433]: 2026-01-22 14:18:35.372 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:35.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:35 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:36.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:36.254+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:36 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:37.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:37.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:37 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:38.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:38.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:38 np0005592159 nova_compute[226433]: 2026-01-22 14:18:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:39 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:39 np0005592159 nova_compute[226433]: 2026-01-22 14:18:39.189 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:39.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:39.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:40.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:40 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:40.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:40 np0005592159 nova_compute[226433]: 2026-01-22 14:18:40.374 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:41 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:41.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:41 np0005592159 nova_compute[226433]: 2026-01-22 14:18:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:41.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:42.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:42 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:42 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:42.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:43 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:43.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:43 np0005592159 nova_compute[226433]: 2026-01-22 14:18:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:43.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:44.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:44 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:44.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:44 np0005592159 nova_compute[226433]: 2026-01-22 14:18:44.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:45.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:45 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:45 np0005592159 nova_compute[226433]: 2026-01-22 14:18:45.376 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:45 np0005592159 nova_compute[226433]: 2026-01-22 14:18:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:45 np0005592159 nova_compute[226433]: 2026-01-22 14:18:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:18:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:45.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:46.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:46.122+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:46 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:46 np0005592159 nova_compute[226433]: 2026-01-22 14:18:46.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:47.091+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.190 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.191 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:18:47.191 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:47 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:47 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:47 np0005592159 nova_compute[226433]: 2026-01-22 14:18:47.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:47.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:48.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:48.050+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:48 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.536 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.537 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.563 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.564 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:18:48 np0005592159 nova_compute[226433]: 2026-01-22 14:18:48.565 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:18:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:18:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2271535365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.008 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:18:49 np0005592159 podman[242419]: 2026-01-22 14:18:49.054136814 +0000 UTC m=+0.109588074 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 22 09:18:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:49.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.221 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.226 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.227 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4786MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.227 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.228 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.301 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.301 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.302 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:18:49 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.362 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:18:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:49.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:18:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3014372553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.796 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.802 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.824 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.826 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:18:49 np0005592159 nova_compute[226433]: 2026-01-22 14:18:49.827 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:18:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:50.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:50.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:50 np0005592159 nova_compute[226433]: 2026-01-22 14:18:50.378 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:50 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:51.142+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:51 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:18:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:51.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:18:51 np0005592159 nova_compute[226433]: 2026-01-22 14:18:51.806 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:18:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:52.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:52.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:52 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:53.115+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:53 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:53 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:18:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:53.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:54.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:54.088+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:54 np0005592159 nova_compute[226433]: 2026-01-22 14:18:54.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:54 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:55.073+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:55 np0005592159 nova_compute[226433]: 2026-01-22 14:18:55.380 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:18:55 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:55.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:18:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:56.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:56.109+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:56 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:57.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:57 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:57.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:18:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:18:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:18:58.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:18:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:58.104+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:58 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:58 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:18:59.099+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:18:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592159 nova_compute[226433]: 2026-01-22 14:18:59.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:18:59 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:18:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:18:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:18:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:18:59.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:00.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:00.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:00 np0005592159 nova_compute[226433]: 2026-01-22 14:19:00.383 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:00 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:01.173+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:01.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:02.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:02.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:02 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:02 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:03.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:03 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:03.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:03 np0005592159 podman[242526]: 2026-01-22 14:19:03.988761566 +0000 UTC m=+0.052346527 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:19:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:04.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:04.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:04 np0005592159 nova_compute[226433]: 2026-01-22 14:19:04.230 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:04 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:05.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:05 np0005592159 nova_compute[226433]: 2026-01-22 14:19:05.386 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 09:19:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:05.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 09:19:05 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:06.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:06.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:06 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:07.155+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:07 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:07 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:08.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:08.139+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:08 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:09.147+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:09 np0005592159 nova_compute[226433]: 2026-01-22 14:19:09.232 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:09.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:10.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:10.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:10 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:10 np0005592159 nova_compute[226433]: 2026-01-22 14:19:10.388 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:11 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:11.197+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:12.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:12.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:12 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:13.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:13 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:13 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:19:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:19:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:13.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:14.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:14.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:14 np0005592159 nova_compute[226433]: 2026-01-22 14:19:14.260 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:14 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:15.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:15 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:15 np0005592159 nova_compute[226433]: 2026-01-22 14:19:15.390 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:15.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:16.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:16.134+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:16 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:17.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:17 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:17.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:18.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:19:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:19:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:19:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/632071219' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:19:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:18.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:18 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592159 nova_compute[226433]: 2026-01-22 14:19:19.261 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:19.278+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:19:19 np0005592159 podman[242759]: 2026-01-22 14:19:19.652253922 +0000 UTC m=+0.089792552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 09:19:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:19.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:20.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:20.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:20 np0005592159 nova_compute[226433]: 2026-01-22 14:19:20.391 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:20 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:21.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:21.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:22 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:22.215+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:23 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:23 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:23.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:24.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:24 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:24.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:24 np0005592159 nova_compute[226433]: 2026-01-22 14:19:24.262 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:25 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:25.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:25 np0005592159 nova_compute[226433]: 2026-01-22 14:19:25.394 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:26.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:26 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:26.223+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:27 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:27 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:27.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:28.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:28 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:28.192+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:29.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:29 np0005592159 nova_compute[226433]: 2026-01-22 14:19:29.265 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:29.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:30.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:30.166+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:30 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:30 np0005592159 nova_compute[226433]: 2026-01-22 14:19:30.397 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:31.126+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:31 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:31.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:32.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:32.111+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:33 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:33.159+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:34.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:34 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:34 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:34.138+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:34 np0005592159 nova_compute[226433]: 2026-01-22 14:19:34.265 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:35 np0005592159 podman[242870]: 2026-01-22 14:19:35.039820296 +0000 UTC m=+0.087914781 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 09:19:35 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:35.168+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:35 np0005592159 nova_compute[226433]: 2026-01-22 14:19:35.400 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:36.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:36 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:36.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:37 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:37.189+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:37.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:38.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:38 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:38.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:38 np0005592159 nova_compute[226433]: 2026-01-22 14:19:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:39.162+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:39 np0005592159 nova_compute[226433]: 2026-01-22 14:19:39.270 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:39 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:40.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:40.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:40 np0005592159 nova_compute[226433]: 2026-01-22 14:19:40.402 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:40 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:41.200+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:42.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:42 np0005592159 nova_compute[226433]: 2026-01-22 14:19:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:42 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:42 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:43.177+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:43 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:19:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:43.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:19:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:44.209+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:44 np0005592159 nova_compute[226433]: 2026-01-22 14:19:44.270 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:44 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:45.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592159 nova_compute[226433]: 2026-01-22 14:19:45.445 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:45 np0005592159 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:45 np0005592159 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:45 np0005592159 nova_compute[226433]: 2026-01-22 14:19:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:19:45 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:46.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:46.141+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:46 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:47.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.192 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.192 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:19:47.193 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:47 np0005592159 nova_compute[226433]: 2026-01-22 14:19:47.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:47 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:47 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:48.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:48.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:19:48 np0005592159 nova_compute[226433]: 2026-01-22 14:19:48.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:19:48 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/578781724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.022 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:19:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:49.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.231 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.232 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4774MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.232 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.233 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.272 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.319 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.320 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.321 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.387 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:19:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1661653774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.817 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.823 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:19:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:49.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.849 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.851 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:19:49 np0005592159 nova_compute[226433]: 2026-01-22 14:19:49.852 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:19:50 np0005592159 podman[242940]: 2026-01-22 14:19:50.017633468 +0000 UTC m=+0.075666533 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 09:19:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:50.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:50.235+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.446 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:19:50 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.852 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.853 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.853 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.910 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:19:50 np0005592159 nova_compute[226433]: 2026-01-22 14:19:50.911 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:51.221+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:51 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:52.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:52.210+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:52 np0005592159 nova_compute[226433]: 2026-01-22 14:19:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:19:52 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:52 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:53.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:53.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:54.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:54.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:54 np0005592159 nova_compute[226433]: 2026-01-22 14:19:54.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:54 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:55.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592159 nova_compute[226433]: 2026-01-22 14:19:55.506 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:55 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:19:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:55.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:19:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:56.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:56.249+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:56 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:57.230+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:57.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:19:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:19:58.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:19:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:58.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:58 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:19:58 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:19:59.164+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:19:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592159 nova_compute[226433]: 2026-01-22 14:19:59.277 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:19:59 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:19:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:19:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:19:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:19:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:00.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:00.183+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:00 np0005592159 nova_compute[226433]: 2026-01-22 14:20:00.510 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:00 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 22 slow ops, oldest one blocked for 2587 sec, osd.2 has slow ops
Jan 22 09:20:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:01.163+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:02.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:02.188+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:02 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:02 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:03.231+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:03 np0005592159 nova_compute[226433]: 2026-01-22 14:20:03.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:20:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:03.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:04 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:04.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:04.239+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:04 np0005592159 nova_compute[226433]: 2026-01-22 14:20:04.279 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:05 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:05.222+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:05 np0005592159 nova_compute[226433]: 2026-01-22 14:20:05.513 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:05.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:05 np0005592159 podman[243026]: 2026-01-22 14:20:05.976243145 +0000 UTC m=+0.043040867 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:20:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:06.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:06.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:06 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:07.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:07.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:07 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:07 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:08.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:08.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:09.243+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592159 nova_compute[226433]: 2026-01-22 14:20:09.281 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:09 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:09.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:10.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:10 np0005592159 nova_compute[226433]: 2026-01-22 14:20:10.515 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:10 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:11.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:11.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:12.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:12 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:12 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:13.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:13.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:20:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:14.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:20:14 np0005592159 nova_compute[226433]: 2026-01-22 14:20:14.284 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:14.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:14 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:15.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:15 np0005592159 nova_compute[226433]: 2026-01-22 14:20:15.517 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:15.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:16.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:16 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:17.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:17 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:17.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:18.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:18.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:18 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592159 nova_compute[226433]: 2026-01-22 14:20:19.287 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:19.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:19.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:20.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:20.339+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:20 np0005592159 nova_compute[226433]: 2026-01-22 14:20:20.519 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:20 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592159 podman[243238]: 2026-01-22 14:20:21.020926278 +0000 UTC m=+0.084924593 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 09:20:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:21.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:20:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:20:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:21.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:22.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:22.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:22 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:23.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:23 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:23.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:24.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:24 np0005592159 nova_compute[226433]: 2026-01-22 14:20:24.289 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:24.314+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:24 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:25.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592159 nova_compute[226433]: 2026-01-22 14:20:25.521 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:20:25 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:25.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:26.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:26 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:27.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:27.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #85. Immutable memtables: 0.
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.915481) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 85
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627915536, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 2007, "num_deletes": 251, "total_data_size": 3802973, "memory_usage": 3860896, "flush_reason": "Manual Compaction"}
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #86: started
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627933649, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 86, "file_size": 2486047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43526, "largest_seqno": 45528, "table_properties": {"data_size": 2478446, "index_size": 4159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19604, "raw_average_key_size": 21, "raw_value_size": 2461643, "raw_average_value_size": 2664, "num_data_blocks": 180, "num_entries": 924, "num_filter_entries": 924, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091492, "oldest_key_time": 1769091492, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 18237 microseconds, and 7175 cpu microseconds.
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.933711) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #86: 2486047 bytes OK
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.933743) [db/memtable_list.cc:519] [default] Level-0 commit table #86 started
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936817) [db/memtable_list.cc:722] [default] Level-0 commit table #86: memtable #1 done
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936841) EVENT_LOG_v1 {"time_micros": 1769091627936835, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.936862) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 3793713, prev total WAL file size 3809451, number of live WAL files 2.
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000082.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.937999) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [86(2427KB)], [84(8855KB)]
Jan 22 09:20:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091627938050, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [86], "files_L6": [84], "score": -1, "input_data_size": 11553621, "oldest_snapshot_seqno": -1}
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #87: 8652 keys, 9903653 bytes, temperature: kUnknown
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628012143, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 87, "file_size": 9903653, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9853112, "index_size": 27837, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21637, "raw_key_size": 231999, "raw_average_key_size": 26, "raw_value_size": 9702291, "raw_average_value_size": 1121, "num_data_blocks": 1064, "num_entries": 8652, "num_filter_entries": 8652, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 87, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.012552) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 9903653 bytes
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.014192) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 155.7 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 8.6 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(8.6) write-amplify(4.0) OK, records in: 9167, records dropped: 515 output_compression: NoCompression
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.014223) EVENT_LOG_v1 {"time_micros": 1769091628014208, "job": 52, "event": "compaction_finished", "compaction_time_micros": 74198, "compaction_time_cpu_micros": 23253, "output_level": 6, "num_output_files": 1, "total_output_size": 9903653, "num_input_records": 9167, "num_output_records": 8652, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628015356, "job": 52, "event": "table_file_deletion", "file_number": 86}
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000084.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091628018564, "job": 52, "event": "table_file_deletion", "file_number": 84}
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:27.937907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:20:28.018642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:28.324+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:28 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:29 np0005592159 nova_compute[226433]: 2026-01-22 14:20:29.291 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:29.370+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:29.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:30.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:30 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:30 np0005592159 nova_compute[226433]: 2026-01-22 14:20:30.524 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:31 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:31.359+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:32.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:32 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:32 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:32.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:33 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:33.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:34.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:34.306+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:34 np0005592159 nova_compute[226433]: 2026-01-22 14:20:34.324 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:34 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:35.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:35 np0005592159 nova_compute[226433]: 2026-01-22 14:20:35.567 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:35.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:36.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:36 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:36.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:36 np0005592159 podman[243373]: 2026-01-22 14:20:36.986587124 +0000 UTC m=+0.049171975 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:20:37 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:37 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:37.414+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:37.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:38.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:38 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:38.464+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:38 np0005592159 nova_compute[226433]: 2026-01-22 14:20:38.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:39 np0005592159 nova_compute[226433]: 2026-01-22 14:20:39.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:39 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:39.443+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:39.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:40.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:40.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:40 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:40 np0005592159 nova_compute[226433]: 2026-01-22 14:20:40.570 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:41.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:41 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:41.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:42.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:42.368+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:42 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:42 np0005592159 nova_compute[226433]: 2026-01-22 14:20:42.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:43.326+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:43 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:43 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:43.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:20:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:44.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:20:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:44.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:44 np0005592159 nova_compute[226433]: 2026-01-22 14:20:44.327 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:44 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:45.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:45 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.573 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:45 np0005592159 nova_compute[226433]: 2026-01-22 14:20:45.609 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:20:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:45.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:46.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:46.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:20:47.194 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:47.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:47 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:47 np0005592159 nova_compute[226433]: 2026-01-22 14:20:47.605 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:47 np0005592159 nova_compute[226433]: 2026-01-22 14:20:47.606 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:47.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:48.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:48.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:48 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:48 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:49.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.328 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.541 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:20:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:49.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:20:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3542653799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:20:49 np0005592159 nova_compute[226433]: 2026-01-22 14:20:49.967 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:20:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:50.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.161 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4768MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.162 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:20:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:50.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.313 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=20GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.507 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.575 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:50 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:20:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/76593589' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.926 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:20:50 np0005592159 nova_compute[226433]: 2026-01-22 14:20:50.932 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:20:51 np0005592159 nova_compute[226433]: 2026-01-22 14:20:51.045 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:20:51 np0005592159 nova_compute[226433]: 2026-01-22 14:20:51.047 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:20:51 np0005592159 nova_compute[226433]: 2026-01-22 14:20:51.047 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:20:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:51.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:51.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:52 np0005592159 podman[243443]: 2026-01-22 14:20:52.011121816 +0000 UTC m=+0.074778897 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.048 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.077 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.077 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.078 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.078 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:20:52 np0005592159 nova_compute[226433]: 2026-01-22 14:20:52.079 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:52.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:52.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:52 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:52 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:53.285+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:53.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:54.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:54 np0005592159 nova_compute[226433]: 2026-01-22 14:20:54.330 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:54.333+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:54 np0005592159 nova_compute[226433]: 2026-01-22 14:20:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:54 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:55.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:55 np0005592159 nova_compute[226433]: 2026-01-22 14:20:55.578 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:55 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:20:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:56.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:20:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:56.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:56 np0005592159 nova_compute[226433]: 2026-01-22 14:20:56.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:20:56 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:57.345+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:57 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:20:57 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:20:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:20:58.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:58.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:58 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:59 np0005592159 nova_compute[226433]: 2026-01-22 14:20:59.332 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:20:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:20:59.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:20:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:20:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:20:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:20:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:20:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:20:59 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:00.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:00.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:00 np0005592159 nova_compute[226433]: 2026-01-22 14:21:00.580 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:01 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:01.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:01 np0005592159 nova_compute[226433]: 2026-01-22 14:21:01.573 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:01 np0005592159 nova_compute[226433]: 2026-01-22 14:21:01.574 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:21:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:01.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:02.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:02.519+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:02 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:03.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:03 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:03.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:04.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:04 np0005592159 nova_compute[226433]: 2026-01-22 14:21:04.335 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:04.597+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:04 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:04 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:05.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:05 np0005592159 nova_compute[226433]: 2026-01-22 14:21:05.581 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:05.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:06 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:06.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:06.628+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:07.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:07.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:21:07 np0005592159 podman[243529]: 2026-01-22 14:21:07.994214003 +0000 UTC m=+0.052626299 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:21:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:08 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:08 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:08.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592159 nova_compute[226433]: 2026-01-22 14:21:09.337 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:09.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:09.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:10.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:10 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:10.539+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:10 np0005592159 nova_compute[226433]: 2026-01-22 14:21:10.584 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:11 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:11.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:11.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:12.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:12 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:12.498+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:13 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:13 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:13.536+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:13.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:14.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:14 np0005592159 nova_compute[226433]: 2026-01-22 14:21:14.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:14 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:14.488+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:15.445+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:15 np0005592159 nova_compute[226433]: 2026-01-22 14:21:15.586 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:16.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:16.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:16 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:17.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:18.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:18.307+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:18 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:19.348+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592159 nova_compute[226433]: 2026-01-22 14:21:19.358 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:19 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:20.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:20.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:20 np0005592159 nova_compute[226433]: 2026-01-22 14:21:20.589 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:21 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:21.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:21:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:21.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:21:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:22.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:22.258+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:22 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:22 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592159 podman[243608]: 2026-01-22 14:21:23.040292728 +0000 UTC m=+0.074684424 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:21:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:23.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:23 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:23.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:24.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:24.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:24 np0005592159 nova_compute[226433]: 2026-01-22 14:21:24.360 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:24 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:25.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592159 nova_compute[226433]: 2026-01-22 14:21:25.592 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:25 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:25.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:26.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:26.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:26 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:27.267+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:27 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:27.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:28.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:28.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:29.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:29 np0005592159 nova_compute[226433]: 2026-01-22 14:21:29.362 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:29 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:21:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:21:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:21:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:29.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:30.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:30 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:30.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:30 np0005592159 nova_compute[226433]: 2026-01-22 14:21:30.595 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:31.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:31 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:32.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:32 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:32.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:33.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:33 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:33 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:33.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:34.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:34.328+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:34 np0005592159 nova_compute[226433]: 2026-01-22 14:21:34.402 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:35 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:21:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:35.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:35 np0005592159 nova_compute[226433]: 2026-01-22 14:21:35.631 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:21:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:21:36 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:36 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:36.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:36.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:37.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:37.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:38 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:38.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:38.227+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:39 np0005592159 podman[243873]: 2026-01-22 14:21:39.029638456 +0000 UTC m=+0.085806174 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:21:39 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:39.224+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:39 np0005592159 nova_compute[226433]: 2026-01-22 14:21:39.441 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:39.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:40 np0005592159 nova_compute[226433]: 2026-01-22 14:21:40.097 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:40 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:40.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:40.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:40 np0005592159 nova_compute[226433]: 2026-01-22 14:21:40.633 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:40 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:40.698 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:21:40 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:40.700 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:21:40 np0005592159 nova_compute[226433]: 2026-01-22 14:21:40.699 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:41 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:41.240+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:41.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:42.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:42.231+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:42 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:43.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:43 np0005592159 nova_compute[226433]: 2026-01-22 14:21:43.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:43 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:43 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:43.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:44.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:44.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:44 np0005592159 nova_compute[226433]: 2026-01-22 14:21:44.443 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:44 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:45.238+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592159 nova_compute[226433]: 2026-01-22 14:21:45.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:45 np0005592159 nova_compute[226433]: 2026-01-22 14:21:45.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 22 09:21:45 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:45 np0005592159 nova_compute[226433]: 2026-01-22 14:21:45.635 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:45.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:46.194+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 22 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:46 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:46 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:46.701 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:21:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:47.189+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:21:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:21:47 np0005592159 nova_compute[226433]: 2026-01-22 14:21:47.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:47 np0005592159 nova_compute[226433]: 2026-01-22 14:21:47.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:47 np0005592159 ceph-mon[77081]: 22 slow requests (by type [ 'delayed' : 22 ] most affected pool [ 'vms' : 18 ])
Jan 22 09:21:47 np0005592159 ceph-mon[77081]: Health check update: 22 slow ops, oldest one blocked for 2697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:47.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:48.155+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:48.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:48 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:49.154+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592159 nova_compute[226433]: 2026-01-22 14:21:49.444 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:21:49 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:49.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:50.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.206 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.207 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.224 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.305 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.306 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.314 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.315 226437 INFO nova.compute.claims [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Claim successful on node compute-2.ctlplane.example.com
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.445 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.467 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.468 226437 DEBUG nova.compute.provider_tree [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.482 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.503 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.515 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.545 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.545 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.571 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.596 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:50 np0005592159 nova_compute[226433]: 2026-01-22 14:21:50.637 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:50 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1686993375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.024 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.029 226437 DEBUG nova.compute.provider_tree [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.049 226437 DEBUG nova.scheduler.client.report [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.070 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.070 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.073 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.073 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.074 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.141 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.141 226437 DEBUG nova.network.neutron [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.165 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:21:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:51.175+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.185 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.285 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.286 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.287 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating image(s)#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.326 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.363 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.397 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.402 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.462 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.463 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.464 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.464 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1271797265' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.493 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.496 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8e98e700-52a4-44ff-8e11-9404cd11d871_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.512 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.573 226437 DEBUG nova.network.neutron [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.574 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.675 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.676 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4744MB free_disk=20.875835418701172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.677 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.677 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:51 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.880 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8e98e700-52a4-44ff-8e11-9404cd11d871_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.384s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:51 np0005592159 nova_compute[226433]: 2026-01-22 14:21:51.948 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] resizing rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:21:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:51.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.052 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.053 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.053 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=20GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.060 226437 DEBUG nova.objects.instance [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'migration_context' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.073 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.073 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Ensure instance console log exists: /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.074 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.076 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.079 226437 WARNING nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.087 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.088 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.096 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.097 226437 DEBUG nova.virt.libvirt.host [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.098 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.098 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.099 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.100 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.101 226437 DEBUG nova.virt.hardware [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.104 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:52.167+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.199 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/910785072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.553 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.586 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.592 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2844275053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.621 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.626 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.647 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.680 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:21:52 np0005592159 nova_compute[226433]: 2026-01-22 14:21:52.681 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:52 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:21:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2190805920' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.101 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.102 226437 DEBUG nova.objects.instance [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.122 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <uuid>8e98e700-52a4-44ff-8e11-9404cd11d871</uuid>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <name>instance-0000000d</name>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:name>tempest-ServersOnMultiNodesTest-server-63037555</nova:name>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:21:52</nova:creationTime>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.nano">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:user uuid="a5be1e8103e142238ae4c912393095c4">tempest-ServersOnMultiNodesTest-59245381-project-member</nova:user>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <nova:project uuid="688eff2d52114848b8ae16c9cfaa49d9">tempest-ServersOnMultiNodesTest-59245381</nova:project>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <nova:ports/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="serial">8e98e700-52a4-44ff-8e11-9404cd11d871</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="uuid">8e98e700-52a4-44ff-8e11-9404cd11d871</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/8e98e700-52a4-44ff-8e11-9404cd11d871_disk">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/console.log" append="off"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:21:53 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:21:53 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:21:53 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:21:53 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
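The domain XML dumped above can be checked offline. A minimal sketch, assuming the XML was saved to a local file named domain.xml (a hypothetical path), that lists each disk's target device and its RBD source using only the Python standard library:

# Sketch: summarize the <disk> elements from a libvirt domain XML like the one
# nova_compute logged above. Assumes the XML was saved to "domain.xml"
# (hypothetical path); uses only the Python standard library.
import xml.etree.ElementTree as ET

tree = ET.parse("domain.xml")
root = tree.getroot()

for disk in root.findall("./devices/disk"):
    target = disk.find("target")
    source = disk.find("source")
    dev = target.get("dev") if target is not None else "?"
    bus = target.get("bus") if target is not None else "?"
    if source is not None and source.get("protocol") == "rbd":
        image = source.get("name")                       # e.g. vms/<uuid>_disk
        mons = [h.get("name") for h in source.findall("host")]
        print(f"{dev} ({bus}): rbd image {image}, monitors {', '.join(mons)}")
    else:
        print(f"{dev} ({bus}): non-RBD source")

Against the XML above this would report vda (virtio) and sda (sata), both backed by the vms pool via the three monitors listed in the <source> elements.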
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.178 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.178 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.179 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Using config drive#033[00m
Jan 22 09:21:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:53.184+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.203 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.360 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Creating config drive at /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.364 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lnqu80d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.489 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5lnqu80d" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.530 226437 DEBUG nova.storage.rbd_utils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] rbd image 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.534 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.706 226437 DEBUG oslo_concurrency.processutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config 8e98e700-52a4-44ff-8e11-9404cd11d871_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:21:53 np0005592159 nova_compute[226433]: 2026-01-22 14:21:53.707 226437 INFO nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deleting local config drive /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871/disk.config because it was imported into RBD.#033[00m
Jan 22 09:21:53 np0005592159 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:21:53 np0005592159 systemd[1]: Started libvirt secret daemon.
Jan 22 09:21:53 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 2702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:21:53 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
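The osd.2 slow-ops reports repeat roughly once per second while the SLOW_OPS health check stays active. A small sketch that tallies the highest slow-op count reported per OSD from a saved journal excerpt (journal.txt is a hypothetical file name; the regex follows the get_health_metrics wording above):

# Sketch: tally the "slow ops" reports (like the osd.2 lines above) from a
# saved journal excerpt. "journal.txt" is a hypothetical file name; the regex
# matches the get_health_metrics wording seen in this log.
import re
from collections import Counter

pattern = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")

counts = Counter()
with open("journal.txt") as fh:
    for line in fh:
        m = pattern.search(line)
        if m:
            counts[m.group(1)] = max(counts[m.group(1)], int(m.group(2)))

for osd, slow in counts.items():
    print(f"{osd}: up to {slow} slow ops reported")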
Jan 22 09:21:53 np0005592159 systemd-machined[194970]: New machine qemu-3-instance-0000000d.
Jan 22 09:21:53 np0005592159 systemd[1]: Started Virtual Machine qemu-3-instance-0000000d.
Jan 22 09:21:53 np0005592159 podman[244251]: 2026-01-22 14:21:53.880666044 +0000 UTC m=+0.122748111 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
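The podman event above reports health_status=healthy for ovn_controller. A sketch that cross-checks this directly with podman inspect; it assumes podman is on PATH, the container name matches the log, and that this podman version exposes the health state under State.Health (hence the defensive get calls):

# Sketch: ask podman for the current health status of the ovn_controller
# container seen in the event above. Key layout under "State" can differ by
# podman version, so missing keys fall back to "no healthcheck".
import json
import subprocess

out = subprocess.run(["podman", "inspect", "ovn_controller"],
                     capture_output=True, text=True, check=True)
data = json.loads(out.stdout)[0]
print(data["State"].get("Health", {}).get("Status", "no healthcheck"))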
Jan 22 09:21:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
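The radosgw beast access lines carry the client IP, request, HTTP status and reported latency in a fixed layout. A minimal parsing sketch based on the format shown above; the embedded sample line is copied from this log:

# Sketch: parse radosgw "beast:" access lines like the one above to pull out
# the client IP, request, HTTP status and reported latency.
import re

beast_re = re.compile(
    r'beast: \S+: (?P<ip>\S+) - \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) \d+ .* latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
        '[22/Jan/2026:14:21:53.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

m = beast_re.search(line)
if m:
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))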
Jan 22 09:21:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:54.171+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:54.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.402 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091714.4021378, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.404 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.406 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.407 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.411 226437 INFO nova.virt.libvirt.driver [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance spawned successfully.#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.411 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.427 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.433 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.437 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.438 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.438 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.439 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.439 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.440 226437 DEBUG nova.virt.libvirt.driver [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.447 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.466 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.466 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769091714.4036539, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.467 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Started (Lifecycle Event)#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.487 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.491 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.495 226437 INFO nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 3.21 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.495 226437 DEBUG nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.517 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.549 226437 INFO nova.compute.manager [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 4.28 seconds to build instance.#033[00m
Jan 22 09:21:54 np0005592159 nova_compute[226433]: 2026-01-22 14:21:54.562 226437 DEBUG oslo_concurrency.lockutils [None req-cc2de22f-5e9e-4c72-bb31-c19a21fd203d a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
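The spawn and build timings logged just above ("Took 3.21 seconds to spawn the instance on the hypervisor", "Took 4.28 seconds to build instance") can be pulled out of a saved journal excerpt with a short script; journal.txt is a hypothetical file name:

# Sketch: extract the "Took N seconds to spawn/build" timings that
# nova.compute.manager logs, as seen above, from a saved journal excerpt.
import re

took_re = re.compile(
    r"Took ([\d.]+) seconds to "
    r"(spawn the instance on the hypervisor|build instance)"
)

with open("journal.txt") as fh:
    for line in fh:
        m = took_re.search(line)
        if m:
            print(f"{m.group(2)}: {m.group(1)}s")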
Jan 22 09:21:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:55.198+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:55 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:55 np0005592159 nova_compute[226433]: 2026-01-22 14:21:55.639 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:21:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:56 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:56.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:56.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:56 np0005592159 nova_compute[226433]: 2026-01-22 14:21:56.652 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:21:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:57.157+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:57 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:21:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:21:58.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:21:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:58.149+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:58 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:21:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:21:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:21:58.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:21:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:21:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:21:59.134+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:21:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:59 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:21:59 np0005592159 nova_compute[226433]: 2026-01-22 14:21:59.449 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:00.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:00.085+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:00.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:00 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:00 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:22:00 np0005592159 nova_compute[226433]: 2026-01-22 14:22:00.643 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:01.130+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:01 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:02.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:02.089+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:02.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:02 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:02 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 2707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:03.090+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:03 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:22:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:04.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:04.116+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:04.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:04 np0005592159 nova_compute[226433]: 2026-01-22 14:22:04.506 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:04 np0005592159 nova_compute[226433]: 2026-01-22 14:22:04.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:04 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:05.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:05 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:05 np0005592159 nova_compute[226433]: 2026-01-22 14:22:05.679 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:06.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:06.196+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:06.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:06 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:07.147+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:07 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2717 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:08.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:08.160+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:08.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:08 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:09.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:09 np0005592159 nova_compute[226433]: 2026-01-22 14:22:09.509 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:09 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:09 np0005592159 podman[244413]: 2026-01-22 14:22:09.995271672 +0000 UTC m=+0.058452830 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 09:22:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:10.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:10.178+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:10 np0005592159 nova_compute[226433]: 2026-01-22 14:22:10.681 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:10 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:10 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:11.211+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:12.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:12.207+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:12.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:13.187+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:14 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2722 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000049s ======
Jan 22 09:22:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:14.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 22 09:22:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:14.138+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:14 np0005592159 nova_compute[226433]: 2026-01-22 14:22:14.511 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:15 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:15.145+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:15 np0005592159 nova_compute[226433]: 2026-01-22 14:22:15.683 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:16.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:16 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:16.105+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:16.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:17 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:17.062+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:18.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:18.057+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:18 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:18 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2727 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:18.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:19.077+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:19 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:19 np0005592159 nova_compute[226433]: 2026-01-22 14:22:19.513 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:20.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:20.063+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:20.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:20 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:20 np0005592159 nova_compute[226433]: 2026-01-22 14:22:20.685 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:21.019+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:21 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:21.987+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:22.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:22:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:22:22 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:22.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:23 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:23 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:23.980+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:24 np0005592159 podman[244491]: 2026-01-22 14:22:24.026522751 +0000 UTC m=+0.087759392 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
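Annotation: the podman health_status record above packs the whole container config into one key=value blob; for monitoring, the useful fields are usually just the container name and the reported health state. A small sketch that extracts those two, assuming the key=value formatting shown in this line:

    import re

    def container_health(line):
        """Return (container name, health state) from a podman health_status journal line."""
        name = re.search(r"name=([^,)]+)", line)
        status = re.search(r"health_status=([^,)]+)", line)
        if not (name and status):
            return None
        return name.group(1), status.group(1)

    # For the line above this returns ('ovn_controller', 'healthy').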
Jan 22 09:22:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:22:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:24.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:22:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000048s ======
Jan 22 09:22:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:24.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000048s
Jan 22 09:22:24 np0005592159 nova_compute[226433]: 2026-01-22 14:22:24.516 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:24 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:24.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.032 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.033 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "8331b067-1b3f-4a1d-a596-e966f6de776a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.051 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.134 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.135 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.144 226437 DEBUG nova.virt.hardware [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.145 226437 INFO nova.compute.claims [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.355 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:25 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.732 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:25 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4155437798' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.840 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
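Annotation: nova-compute shells out to "ceph df --format=json" here to refresh pool usage before claiming disk for the new instance. A sketch of the same probe using only the standard library instead of oslo_concurrency.processutils (assumes the ceph CLI, the 'openstack' keyring, and a JSON layout with a top-level 'pools' list, as in recent Ceph releases):

    import json
    import subprocess

    def ceph_pool_usage(conf="/etc/ceph/ceph.conf", user="openstack"):
        """Run the same df probe as the log line above and return per-pool stats."""
        out = subprocess.run(
            ["ceph", "df", "--format=json", "--id", user, "--conf", conf],
            check=True, capture_output=True, text=True,
        ).stdout
        stats = json.loads(out)
        # 'ceph df --format=json' is expected to report per-pool usage under 'pools'.
        return {p["name"]: p["stats"] for p in stats["pools"]}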
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.846 226437 DEBUG nova.compute.provider_tree [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.868 226437 DEBUG nova.scheduler.client.report [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.903 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.904 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.955 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.956 226437 DEBUG nova.network.neutron [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:22:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:25.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:25 np0005592159 nova_compute[226433]: 2026-01-22 14:22:25.979 226437 INFO nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.003 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:22:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.097 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.098 226437 DEBUG nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.099 226437 INFO nova.virt.libvirt.driver [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Creating image(s)#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.125 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.152 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.177 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.180 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.231 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.232 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.233 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.233 226437 DEBUG oslo_concurrency.lockutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
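Annotation: the Acquiring / acquired / released triplets above are oslo.concurrency's lockutils logging at DEBUG level from its "inner" wrapper; the lock name is the base-image hash. A minimal sketch of the same guard (illustrative only, not nova's actual code path, which applies the decorator dynamically inside Image.cache):

    from oslo_concurrency import lockutils

    IMAGE_HASH = "389efd6047b99779d5161939afa4f2bdb261bfd0"  # lock name taken from the log

    @lockutils.synchronized(IMAGE_HASH)
    def fetch_func_sync():
        # Placeholder for the cache-population work the lock protects; the
        # decorator's wrapper emits the "Acquiring lock" / "acquired" /
        # "released" DEBUG lines seen above when DEBUG logging is enabled.
        pass

    fetch_func_sync()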
Jan 22 09:22:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.256 226437 DEBUG nova.storage.rbd_utils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 8331b067-1b3f-4a1d-a596-e966f6de776a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.260 226437 DEBUG oslo_concurrency.processutils [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 8331b067-1b3f-4a1d-a596-e966f6de776a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
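Annotation: here nova first confirms the instance disk is absent from the 'vms' pool (the repeated "rbd image ... does not exist" lines) and then imports the cached base file under that image name. A check-then-import sketch mirroring the logged command (assumes the rbd CLI and the same keyring/conf paths):

    import subprocess

    POOL = "vms"
    BASE = "/var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0"
    DISK = "8331b067-1b3f-4a1d-a596-e966f6de776a_disk"
    AUTH = ["--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]

    def image_exists(name):
        # 'rbd info' exits non-zero when the image is not present in the pool.
        return subprocess.run(["rbd", "info", "--pool", POOL, name] + AUTH,
                              capture_output=True).returncode == 0

    if not image_exists(DISK):
        # Same import invocation as the "Running cmd (subprocess)" line above.
        subprocess.run(["rbd", "import", "--pool", POOL, BASE, DISK,
                        "--image-format=2"] + AUTH, check=True)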
Jan 22 09:22:26 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.687 226437 DEBUG nova.network.neutron [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 22 09:22:26 np0005592159 nova_compute[226433]: 2026-01-22 14:22:26.688 226437 DEBUG nova.compute.manager [None req-13ccc3b9-c07d-4276-8e9f-c06323a7a7a7 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:22:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:26.930+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:27 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:27.946+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:28.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:28 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:28.947+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:29 np0005592159 nova_compute[226433]: 2026-01-22 14:22:29.518 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:29 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:29.905+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:30.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:30.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:30 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:30 np0005592159 nova_compute[226433]: 2026-01-22 14:22:30.735 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:30.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:31 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:31.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:32.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:32.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:32.862+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:33 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:33 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 2738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:33.899+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
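Annotation: between 14:22:31 and 14:22:32 the reported backlog jumps from 1 to 8 slow ops, all still oldest-blocked on the same rbd_mirror_snapshot_schedule read. A small sketch that tracks that count over time from a journal export piped on stdin (assumes only the phrasing of the get_health_metrics lines above):

    import re
    import sys

    METRIC_RE = re.compile(
        r"^(?P<stamp>\w+ \d+ [\d:]+) .*get_health_metrics reporting "
        r"(?P<count>\d+) slow ops"
    )

    def slow_op_series(lines):
        """Yield (journal timestamp, reported slow-op count) for each matching line."""
        for line in lines:
            m = METRIC_RE.match(line)
            if m:
                yield m.group("stamp"), int(m.group("count"))

    if __name__ == "__main__":
        for stamp, count in slow_op_series(sys.stdin):
            print(stamp, count)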
Jan 22 09:22:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:34.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:34 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:22:34 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:22:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:34.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:22:34 np0005592159 nova_compute[226433]: 2026-01-22 14:22:34.520 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:34.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:35 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:35 np0005592159 nova_compute[226433]: 2026-01-22 14:22:35.774 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:35.948+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:36.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:36 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:22:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:22:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:36.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:37 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:37.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:38.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:38 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:38 np0005592159 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:38.996+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:39 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:39 np0005592159 nova_compute[226433]: 2026-01-22 14:22:39.522 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:40.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:40.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:40.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:40 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:40 np0005592159 nova_compute[226433]: 2026-01-22 14:22:40.776 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:40.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:40 np0005592159 podman[244823]: 2026-01-22 14:22:40.991351203 +0000 UTC m=+0.052163416 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:22:41 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:22:41 np0005592159 nova_compute[226433]: 2026-01-22 14:22:41.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:41.933+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:42.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:42 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:42.962+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:43 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:43 np0005592159 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:43 np0005592159 nova_compute[226433]: 2026-01-22 14:22:43.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:43 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:43.890 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:22:43 np0005592159 nova_compute[226433]: 2026-01-22 14:22:43.890 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:43 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:43.891 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:22:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:44.002+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:22:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:44.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:22:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:44.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:44 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:44 np0005592159 nova_compute[226433]: 2026-01-22 14:22:44.525 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:45.005+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:45 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:45 np0005592159 nova_compute[226433]: 2026-01-22 14:22:45.835 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:46.035+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:46.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:46 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:46 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:46.893 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:22:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:47.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.195 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.196 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:22:47.196 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #88. Immutable memtables: 0.
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.289939) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 88
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767290034, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 2049, "num_deletes": 256, "total_data_size": 3946951, "memory_usage": 4011536, "flush_reason": "Manual Compaction"}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #89: started
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767306796, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 89, "file_size": 2581347, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45533, "largest_seqno": 47577, "table_properties": {"data_size": 2573555, "index_size": 4350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19900, "raw_average_key_size": 21, "raw_value_size": 2556335, "raw_average_value_size": 2699, "num_data_blocks": 188, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091627, "oldest_key_time": 1769091627, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 16902 microseconds, and 6142 cpu microseconds.
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.306853) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #89: 2581347 bytes OK
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.306873) [db/memtable_list.cc:519] [default] Level-0 commit table #89 started
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308728) [db/memtable_list.cc:722] [default] Level-0 commit table #89: memtable #1 done
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308746) EVENT_LOG_v1 {"time_micros": 1769091767308741, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.308765) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 3937476, prev total WAL file size 3937476, number of live WAL files 2.
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000085.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309785) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373538' seq:72057594037927935, type:22 .. '6C6F676D0032303130' seq:0, type:0; will stop at (end)
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [89(2520KB)], [87(9671KB)]
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767309821, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [89], "files_L6": [87], "score": -1, "input_data_size": 12485000, "oldest_snapshot_seqno": -1}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #90: 9074 keys, 12329262 bytes, temperature: kUnknown
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767390460, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 90, "file_size": 12329262, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12273894, "index_size": 31576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22725, "raw_key_size": 242720, "raw_average_key_size": 26, "raw_value_size": 12113724, "raw_average_value_size": 1334, "num_data_blocks": 1217, "num_entries": 9074, "num_filter_entries": 9074, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091767, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 90, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.390742) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 12329262 bytes
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.392054) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.6 rd, 152.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 9.4 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(9.6) write-amplify(4.8) OK, records in: 9599, records dropped: 525 output_compression: NoCompression
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.392068) EVENT_LOG_v1 {"time_micros": 1769091767392061, "job": 54, "event": "compaction_finished", "compaction_time_micros": 80754, "compaction_time_cpu_micros": 27020, "output_level": 6, "num_output_files": 1, "total_output_size": 12329262, "num_input_records": 9599, "num_output_records": 9074, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767392516, "job": 54, "event": "table_file_deletion", "file_number": 89}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000087.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091767393976, "job": 54, "event": "table_file_deletion", "file_number": 87}
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.309693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394045) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:22:47.394048) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:22:47 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:47 np0005592159 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:47 np0005592159 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:47 np0005592159 nova_compute[226433]: 2026-01-22 14:22:47.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:22:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:48.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:48.077+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:48 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:48 np0005592159 nova_compute[226433]: 2026-01-22 14:22:48.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:49.074+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:49 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:49 np0005592159 nova_compute[226433]: 2026-01-22 14:22:49.527 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:50.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:50.103+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:50.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:50 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.542 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.542 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:22:50 np0005592159 nova_compute[226433]: 2026-01-22 14:22:50.836 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:51.143+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:51 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:51 np0005592159 nova_compute[226433]: 2026-01-22 14:22:51.510 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:22:51 np0005592159 nova_compute[226433]: 2026-01-22 14:22:51.510 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:22:51 np0005592159 nova_compute[226433]: 2026-01-22 14:22:51.511 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:22:51 np0005592159 nova_compute[226433]: 2026-01-22 14:22:51.511 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:22:51 np0005592159 nova_compute[226433]: 2026-01-22 14:22:51.785 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:22:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:52.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:52.152+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:52 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:52 np0005592159 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.779 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.800 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.801 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.802 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.802 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.832 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:22:52 np0005592159 nova_compute[226433]: 2026-01-22 14:22:52.833 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:53.125+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4134154296' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.285 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.391 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.391 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:22:53 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.588 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.589 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4544MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.590 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.590 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.683 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.683 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.684 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:22:53 np0005592159 nova_compute[226433]: 2026-01-22 14:22:53.805 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:22:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:22:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:54.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:54.123+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:22:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3543596929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.249 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.256 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:22:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.274 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.294 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.294 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:22:54 np0005592159 nova_compute[226433]: 2026-01-22 14:22:54.528 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:54 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592159 podman[244944]: 2026-01-22 14:22:55.036900934 +0000 UTC m=+0.092069023 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:22:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:55.132+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:55 np0005592159 nova_compute[226433]: 2026-01-22 14:22:55.879 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:22:56 np0005592159 nova_compute[226433]: 2026-01-22 14:22:56.009 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:22:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:22:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:56.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:22:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:56.107+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:56.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:56 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:57.092+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:57 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:22:57 np0005592159 ceph-mon[77081]: Health check update: 8 slow ops, oldest one blocked for 2768 sec, osd.2 has slow ops (SLOW_OPS)
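The ceph-mon health-check updates above (and the matching osd.2 get_health_metrics lines) repeat every few seconds while the oldest op stays blocked. A minimal sketch, assuming the journal has been exported to plain text (for example via journalctl), that pulls the SLOW_OPS counters out of such an export so the growth of the blocked time is easy to follow; the regex relies only on the message format visible in these lines:

```python
# Sketch: extract SLOW_OPS health-check updates from a plain-text journal export.
# Assumes only the message format seen above, e.g.
#   "Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)"
import re
import sys

SLOW_OPS = re.compile(
    r"Health check update: (?P<ops>\d+) slow ops, "
    r"oldest one blocked for (?P<blocked>\d+) sec, "
    r"(?P<daemon>\S+) has slow ops \(SLOW_OPS\)"
)

def slow_ops_updates(lines):
    """Yield (op_count, blocked_seconds, daemon) for every SLOW_OPS update line."""
    for line in lines:
        m = SLOW_OPS.search(line)
        if m:
            yield int(m["ops"]), int(m["blocked"]), m["daemon"]

if __name__ == "__main__":
    for ops, blocked, daemon in slow_ops_updates(sys.stdin):
        print(f"{daemon}: {ops} slow ops, oldest blocked {blocked}s")
```

Run over the updates in this window it would show osd.2 with the blocked time climbing from 2768 s to 2793 s while the reported op count jumps from 8 to 29.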
Jan 22 09:22:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:22:58.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:58.140+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:22:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:22:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:22:58.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:22:58 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
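The mon cache-tuning line above reports its allocations in raw bytes; a quick conversion sketch (not part of the captured log) makes the split readable: inc_alloc and full_alloc are exactly 332 MiB, kv_alloc is 304 MiB, and cache_size is roughly 973 MiB.

```python
# Sketch: convert the byte counts in the _set_new_cache_sizes line above to MiB.
MIB = 1024 * 1024
sizes = {
    "cache_size": 1020054731,
    "inc_alloc": 348127232,
    "full_alloc": 348127232,
    "kv_alloc": 318767104,
}
for name, value in sizes.items():
    print(f"{name}: {value / MIB:.1f} MiB")
# cache_size: 972.8 MiB, inc_alloc: 332.0 MiB, full_alloc: 332.0 MiB, kv_alloc: 304.0 MiB
```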
Jan 22 09:22:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:22:59.099+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:22:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:22:59 np0005592159 nova_compute[226433]: 2026-01-22 14:22:59.530 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:22:59 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:00.076+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:00.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:00 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:00 np0005592159 nova_compute[226433]: 2026-01-22 14:23:00.881 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:01.047+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:01 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:02.043+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:02.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:02.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:02 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:03.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:03 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:04.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:04.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
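The radosgw "beast" lines above are the frontend's access log; here they record the anonymous "HEAD /" probes arriving from 192.168.122.100 and 192.168.122.102 roughly every two seconds. A minimal sketch for splitting such a line into fields, assuming only the layout visible in this journal:

```python
# Sketch: split a radosgw beast access-log line into fields.
# Assumed layout (as seen above):
#   beast: <req-ptr>: <client-ip> - <user> [<timestamp>] "<request-line>" <status> <bytes> - - - latency=<seconds>s
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
    r'(?P<status>\d{3}) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    """Return a dict of access-log fields, or None for non-beast lines."""
    m = BEAST.search(line)
    return m.groupdict() if m else None

example = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
           '[22/Jan/2026:14:23:04.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
           'latency=0.000000000s')
print(parse_beast(example))
# {'ip': '192.168.122.100', 'user': 'anonymous', ..., 'status': '200', 'latency': '0.000000000'}
```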
Jan 22 09:23:04 np0005592159 nova_compute[226433]: 2026-01-22 14:23:04.533 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:04 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:05.004+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:05 np0005592159 nova_compute[226433]: 2026-01-22 14:23:05.936 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:05.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:06.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:06 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:06.912+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:07 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:07.924+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:08.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:08 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:08.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:09 np0005592159 nova_compute[226433]: 2026-01-22 14:23:09.534 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:09 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:09.910+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:10.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:10 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:10.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:10 np0005592159 nova_compute[226433]: 2026-01-22 14:23:10.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:11 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:11.919+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:11 np0005592159 podman[245030]: 2026-01-22 14:23:11.989327433 +0000 UTC m=+0.053358649 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
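The podman health_status events above embed the full container definition as a Python-literal dict in the config_data= field. A sketch for pulling that dict back out of one of these event lines and reading it as data; it assumes the field is rendered exactly as above (balanced braces, no braces inside string values), which holds for the ovn_controller and ovn_metadata_agent events in this journal:

```python
# Sketch: recover the config_data={...} dict from a podman health_status event line.
# Assumes the dict is a valid Python literal and that no string value contains '{' or '}',
# which is true for the events shown above.
import ast

def extract_config_data(line):
    """Return config_data as a dict, or None if the line has no config_data= field."""
    start = line.find("config_data={")
    if start < 0:
        return None
    i = start + len("config_data=")
    depth = 0
    for j in range(i, len(line)):
        if line[j] == "{":
            depth += 1
        elif line[j] == "}":
            depth -= 1
            if depth == 0:
                return ast.literal_eval(line[i:j + 1])
    return None

# e.g. for the ovn_metadata_agent event above:
# cfg = extract_config_data(journal_line)
# print(cfg["image"], len(cfg["volumes"]))   # image reference and number of bind mounts
```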
Jan 22 09:23:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:12 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:12 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:12.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:13 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:13.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:14.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:14 np0005592159 nova_compute[226433]: 2026-01-22 14:23:14.536 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:14 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:14.943+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:15.900+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:15 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:15 np0005592159 nova_compute[226433]: 2026-01-22 14:23:15.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:16.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:16.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:16.909+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:16 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:17.944+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #91. Immutable memtables: 0.
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.987426) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 91
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797987485, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 662, "num_deletes": 251, "total_data_size": 873899, "memory_usage": 885624, "flush_reason": "Manual Compaction"}
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #92: started
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091797997747, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 92, "file_size": 573365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47582, "largest_seqno": 48239, "table_properties": {"data_size": 570254, "index_size": 955, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8063, "raw_average_key_size": 19, "raw_value_size": 563778, "raw_average_value_size": 1375, "num_data_blocks": 42, "num_entries": 410, "num_filter_entries": 410, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091767, "oldest_key_time": 1769091767, "file_creation_time": 1769091797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 10364 microseconds, and 4208 cpu microseconds.
Jan 22 09:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.997797) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #92: 573365 bytes OK
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:17.997820) [db/memtable_list.cc:519] [default] Level-0 commit table #92 started
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003885) [db/memtable_list.cc:722] [default] Level-0 commit table #92: memtable #1 done
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003913) EVENT_LOG_v1 {"time_micros": 1769091798003906, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.003937) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 870214, prev total WAL file size 870214, number of live WAL files 2.
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000088.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.004716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [92(559KB)], [90(11MB)]
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798004764, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [92], "files_L6": [90], "score": -1, "input_data_size": 12902627, "oldest_snapshot_seqno": -1}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #93: 8974 keys, 11173637 bytes, temperature: kUnknown
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798082010, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 93, "file_size": 11173637, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11119869, "index_size": 30232, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22469, "raw_key_size": 241529, "raw_average_key_size": 26, "raw_value_size": 10962080, "raw_average_value_size": 1221, "num_data_blocks": 1156, "num_entries": 8974, "num_filter_entries": 8974, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091798, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 93, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.082353) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11173637 bytes
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.8 rd, 144.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.8 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(42.0) write-amplify(19.5) OK, records in: 9484, records dropped: 510 output_compression: NoCompression
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.084393) EVENT_LOG_v1 {"time_micros": 1769091798084380, "job": 56, "event": "compaction_finished", "compaction_time_micros": 77337, "compaction_time_cpu_micros": 25005, "output_level": 6, "num_output_files": 1, "total_output_size": 11173637, "num_input_records": 9484, "num_output_records": 8974, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798084716, "job": 56, "event": "table_file_deletion", "file_number": 92}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000090.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091798088725, "job": 56, "event": "table_file_deletion", "file_number": 90}
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.004653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:23:18.088863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
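The ceph-mon rocksdb lines above carry structured EVENT_LOG_v1 records (flush_started/flush_finished for job 55, compaction_started/compaction_finished for job 56) as JSON appended to the message text. A minimal sketch that decodes those records from a plain-text journal export so the flush and compaction activity can be inspected as data:

```python
# Sketch: decode the RocksDB EVENT_LOG_v1 JSON payloads embedded in the ceph-mon lines above.
# The JSON object always follows the literal marker "EVENT_LOG_v1 " and runs to the end of the line.
import json

MARKER = "EVENT_LOG_v1 "

def rocksdb_events(lines):
    """Yield each EVENT_LOG_v1 record found in the log lines as a decoded dict."""
    for line in lines:
        idx = line.find(MARKER)
        if idx >= 0:
            yield json.loads(line[idx + len(MARKER):])

# e.g. summarise each event:
# for ev in rocksdb_events(open("journal.txt")):
#     print(ev.get("job"), ev.get("event"), ev.get("time_micros"))
```

For the records in this window it would show job 56 finishing its manual compaction in 77337 microseconds with a single 11173637-byte L6 output file, consistent with the "read-write-amplify(42.0) write-amplify(19.5)" summary line above.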
Jan 22 09:23:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:18.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:23:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:18.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:18.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:18 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:19 np0005592159 nova_compute[226433]: 2026-01-22 14:23:19.538 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:19.907+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:20 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:20.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:20.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:20 np0005592159 nova_compute[226433]: 2026-01-22 14:23:20.945 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:21 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:21.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:22.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:22.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:22 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:22.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:23 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:23 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:23.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:24.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:24 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:24 np0005592159 nova_compute[226433]: 2026-01-22 14:23:24.540 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:23:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:24.957+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:25.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:25 np0005592159 nova_compute[226433]: 2026-01-22 14:23:25.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:26 np0005592159 podman[245107]: 2026-01-22 14:23:26.04374261 +0000 UTC m=+0.095013042 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:23:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:26.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:26.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:26 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:26.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:27 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:27.974+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:28.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:28.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:28 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:29.019+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:29 np0005592159 nova_compute[226433]: 2026-01-22 14:23:29.542 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:29 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:30.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:30.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:30 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:30 np0005592159 nova_compute[226433]: 2026-01-22 14:23:30.951 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:31.040+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:31 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:32.032+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:23:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:23:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:32.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:32 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:32 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:33.012+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:33 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:34.041+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:34.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:34.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:34 np0005592159 nova_compute[226433]: 2026-01-22 14:23:34.544 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:34 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:34 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:35.015+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:35 np0005592159 nova_compute[226433]: 2026-01-22 14:23:35.954 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:36.021+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:36.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:36.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:36 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:37.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:37 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:38.083+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:38.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:38.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:38 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:39.067+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:39 np0005592159 nova_compute[226433]: 2026-01-22 14:23:39.546 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:39 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:40.088+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:40.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:40.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:40 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:40 np0005592159 nova_compute[226433]: 2026-01-22 14:23:40.956 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:41.054+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:41 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:42.008+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:42.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:42.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:42 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:42 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:42 np0005592159 podman[245324]: 2026-01-22 14:23:42.989431847 +0000 UTC m=+0.053800770 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:23:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:43.042+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:43 np0005592159 nova_compute[226433]: 2026-01-22 14:23:43.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:44.046+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:44.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:44.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:23:44 np0005592159 nova_compute[226433]: 2026-01-22 14:23:44.549 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:45.000+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:45 np0005592159 nova_compute[226433]: 2026-01-22 14:23:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:45 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:45 np0005592159 nova_compute[226433]: 2026-01-22 14:23:45.959 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:45.965+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:46.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:46 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:46.968+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.197 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:23:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:23:47 np0005592159 nova_compute[226433]: 2026-01-22 14:23:47.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:47 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:47 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:47.925+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:48.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:48.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:48 np0005592159 nova_compute[226433]: 2026-01-22 14:23:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:48 np0005592159 nova_compute[226433]: 2026-01-22 14:23:48.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:23:48 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:48.955+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:49 np0005592159 nova_compute[226433]: 2026-01-22 14:23:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:49 np0005592159 nova_compute[226433]: 2026-01-22 14:23:49.551 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:49 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:50.004+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:50.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:50 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:23:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:50.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:50 np0005592159 nova_compute[226433]: 2026-01-22 14:23:50.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:23:51 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:51.921+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.927 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.927 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.928 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:23:51 np0005592159 nova_compute[226433]: 2026-01-22 14:23:51.928 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:23:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:52.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:52 np0005592159 nova_compute[226433]: 2026-01-22 14:23:52.217 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:23:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:52 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:52.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.007 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.055 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.056 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.056 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.108 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.108 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.109 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:23:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:23:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/830754578' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.547 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.632 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.632 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:23:53 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:53 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.801 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4508MB free_disk=20.771656036376953GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.802 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:23:53 np0005592159 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 22 09:23:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.941 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.942 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.942 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.943 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:23:53 np0005592159 nova_compute[226433]: 2026-01-22 14:23:53.944 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=20GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:23:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:53.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.084 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:23:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:54.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:23:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2625343517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.549 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.557 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.579 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.580 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.580 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:23:54 np0005592159 nova_compute[226433]: 2026-01-22 14:23:54.590 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:54 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:54.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:55.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:55 np0005592159 nova_compute[226433]: 2026-01-22 14:23:55.964 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:56 np0005592159 nova_compute[226433]: 2026-01-22 14:23:56.041 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:56 np0005592159 nova_compute[226433]: 2026-01-22 14:23:56.041 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:23:56 np0005592159 nova_compute[226433]: 2026-01-22 14:23:56.086 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:23:56.086 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:23:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:23:56.088 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:23:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:56.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:56.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:56 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:56.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:57 np0005592159 podman[245495]: 2026-01-22 14:23:57.012014413 +0000 UTC m=+0.076125457 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:23:57 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:57.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:23:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:23:58.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:23:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:23:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:23:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:23:58.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:23:58 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:23:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:58.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:59 np0005592159 nova_compute[226433]: 2026-01-22 14:23:59.591 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:23:59 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:23:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:23:59.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:23:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:00.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:00 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:00 np0005592159 nova_compute[226433]: 2026-01-22 14:24:00.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:01.028+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:01.992+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:02 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:02 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:02.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:03 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:03.976+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:04 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:24:04.090 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:24:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:04.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:04 np0005592159 nova_compute[226433]: 2026-01-22 14:24:04.646 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:04 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:04.966+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:05 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:05.981+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:05 np0005592159 nova_compute[226433]: 2026-01-22 14:24:05.987 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:06.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:06 np0005592159 nova_compute[226433]: 2026-01-22 14:24:06.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:06 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:06.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:07 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:07.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:08.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:08 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:08.953+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:09 np0005592159 nova_compute[226433]: 2026-01-22 14:24:09.647 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:09.949+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:10.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:10 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:10.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:10.973+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:10 np0005592159 nova_compute[226433]: 2026-01-22 14:24:10.989 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:11 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:11.992+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:12.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:12.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:12 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:12.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:13 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:13 np0005592159 podman[245529]: 2026-01-22 14:24:13.998113935 +0000 UTC m=+0.064252360 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 22 09:24:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:14.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:14.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:14.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:14 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:14 np0005592159 nova_compute[226433]: 2026-01-22 14:24:14.650 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:15.064+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:15 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:15 np0005592159 nova_compute[226433]: 2026-01-22 14:24:15.993 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:16.058+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:24:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:24:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:16 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:17.043+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.085 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.086 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.109 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.207 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.207 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.215 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.215 226437 INFO nova.compute.claims [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.444 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:17 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:24:17 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/446750844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.874 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.879 226437 DEBUG nova.compute.provider_tree [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.901 226437 DEBUG nova.scheduler.client.report [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.932 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.933 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.988 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:24:17 np0005592159 nova_compute[226433]: 2026-01-22 14:24:17.988 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.020 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.055 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:24:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:18.071+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.183 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.185 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.185 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating image(s)#033[00m
Jan 22 09:24:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:18.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.221 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.266 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.311 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.319 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:18.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.385 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.387 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.387 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.388 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.423 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.428 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a0b3924b-4422-47c5-ba40-748e41b14d00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:18 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.659 226437 DEBUG nova.policy [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b8229aedbc64b9691880a91d559e987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7efa67e548af42419a603e06c3b85f6d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.703 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 a0b3924b-4422-47c5-ba40-748e41b14d00_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.805 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] resizing rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:24:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.921 226437 DEBUG nova.objects.instance [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lazy-loading 'migration_context' on Instance uuid a0b3924b-4422-47c5-ba40-748e41b14d00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.946 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.947 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Ensure instance console log exists: /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:18 np0005592159 nova_compute[226433]: 2026-01-22 14:24:18.948 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:19.094+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:19 np0005592159 nova_compute[226433]: 2026-01-22 14:24:19.651 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:19 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:20.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:20.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:20.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:20 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:20 np0005592159 nova_compute[226433]: 2026-01-22 14:24:20.759 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Successfully updated port: 982269cf-4df1-4bc7-9b49-f0de807afdd7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:24:20 np0005592159 nova_compute[226433]: 2026-01-22 14:24:20.783 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:24:20 np0005592159 nova_compute[226433]: 2026-01-22 14:24:20.784 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquired lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:24:20 np0005592159 nova_compute[226433]: 2026-01-22 14:24:20.784 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:24:20 np0005592159 nova_compute[226433]: 2026-01-22 14:24:20.996 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:21 np0005592159 nova_compute[226433]: 2026-01-22 14:24:21.145 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:24:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:21.158+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:21 np0005592159 nova_compute[226433]: 2026-01-22 14:24:21.346 226437 DEBUG nova.compute.manager [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Received event network-changed-982269cf-4df1-4bc7-9b49-f0de807afdd7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:24:21 np0005592159 nova_compute[226433]: 2026-01-22 14:24:21.347 226437 DEBUG nova.compute.manager [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Refreshing instance network info cache due to event network-changed-982269cf-4df1-4bc7-9b49-f0de807afdd7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:24:21 np0005592159 nova_compute[226433]: 2026-01-22 14:24:21.347 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:24:21 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:22.193+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:22.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:22.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.756 226437 DEBUG nova.network.neutron [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updating instance_info_cache with network_info: [{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.789 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Releasing lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.789 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Instance network_info: |[{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.790 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.790 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Refreshing network info cache for port 982269cf-4df1-4bc7-9b49-f0de807afdd7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:24:22 np0005592159 nova_compute[226433]: 2026-01-22 14:24:22.795 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Start _get_guest_xml network_info=[{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.031 226437 WARNING nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.037 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.038 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.042 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.042 226437 DEBUG nova.virt.libvirt.host [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.044 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.045 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.046 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.047 226437 DEBUG nova.virt.hardware [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.050 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:23.181+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2112882646' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.508 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.531 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.537 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:24:23 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1383228421' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.990 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.992 226437 DEBUG nova.virt.libvirt.vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:24:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1971220718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1971220718',id=17,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7efa67e548af42419a603e06c3b85f6d',ramdisk_id='',reservation_id='r-ongku9tq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1914209315',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:24:18Z,user_data=None,user_id='3b8229aedbc64b9691880a91d559e987',uuid=a0b3924b-4422-47c5-ba40-748e41b14d00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.993 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converting VIF {"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.994 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:24:23 np0005592159 nova_compute[226433]: 2026-01-22 14:24:23.995 226437 DEBUG nova.objects.instance [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lazy-loading 'pci_devices' on Instance uuid a0b3924b-4422-47c5-ba40-748e41b14d00 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.017 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <uuid>a0b3924b-4422-47c5-ba40-748e41b14d00</uuid>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <name>instance-00000011</name>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:name>tempest-LiveAutoBlockMigrationV225Test-server-1971220718</nova:name>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:24:23</nova:creationTime>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.nano">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:user uuid="3b8229aedbc64b9691880a91d559e987">tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member</nova:user>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:project uuid="7efa67e548af42419a603e06c3b85f6d">tempest-LiveAutoBlockMigrationV225Test-1914209315</nova:project>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <nova:ports>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <nova:port uuid="982269cf-4df1-4bc7-9b49-f0de807afdd7">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        </nova:port>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </nova:ports>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="serial">a0b3924b-4422-47c5-ba40-748e41b14d00</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="uuid">a0b3924b-4422-47c5-ba40-748e41b14d00</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/a0b3924b-4422-47c5-ba40-748e41b14d00_disk">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <interface type="ethernet">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <mac address="fa:16:3e:03:98:da"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <mtu size="1442"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <target dev="tap982269cf-4d"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </interface>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/console.log" append="off"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:24:24 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:24:24 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:24:24 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:24:24 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
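The XML dump above is the guest Nova is about to define: q35 machine type, a single Nehalem vCPU, an RBD-backed root disk in the vms pool served by the three monitors on 192.168.122.100-102, and a virtio interface on tap982269cf-4d. As a minimal sketch (assuming the dump has been saved to a local file named domain.xml, a hypothetical name, not something in the log), the RBD disks and their monitor endpoints can be pulled back out of such a dump with Python's standard library:

    # Sketch: list the RBD-backed disks and their monitor hosts from a libvirt
    # domain XML dump like the one logged above. "domain.xml" is illustrative.
    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    for disk in root.findall("./devices/disk[@type='network']"):
        source = disk.find("source")
        target = disk.find("target")
        if source is None or source.get("protocol") != "rbd":
            continue
        mons = ["%s:%s" % (h.get("name"), h.get("port"))
                for h in source.findall("host")]
        print(target.get("dev"), source.get("name"), ", ".join(mons))
        # -> vda vms/a0b3924b-4422-47c5-ba40-748e41b14d00_disk 192.168.122.100:6789, ...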
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.018 226437 DEBUG nova.compute.manager [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Preparing to wait for external event network-vif-plugged-982269cf-4df1-4bc7-9b49-f0de807afdd7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.019 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.019 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.020 226437 DEBUG oslo_concurrency.lockutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "a0b3924b-4422-47c5-ba40-748e41b14d00-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
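The three lockutils messages above show Nova registering the external event it will wait for (network-vif-plugged-982269cf-...) under a per-instance named lock, "<uuid>-events", so concurrent callers create the pending-event entry exactly once. A minimal sketch of that oslo.concurrency pattern (the body of the critical section is illustrative, not Nova's actual code):

    # Sketch of the named-lock pattern visible in the messages above.
    from oslo_concurrency import lockutils

    instance_uuid = "a0b3924b-4422-47c5-ba40-748e41b14d00"

    with lockutils.lock("%s-events" % instance_uuid):
        # critical section: create or fetch the pending
        # "network-vif-plugged" event entry for this instance
        pass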
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.020 226437 DEBUG nova.virt.libvirt.vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-22T14:24:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-LiveAutoBlockMigrationV225Test-server-1971220718',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-liveautoblockmigrationv225test-server-1971220718',id=17,image_ref='dc084f46-456d-429d-85f6-836af4fccd82',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7efa67e548af42419a603e06c3b85f6d',ramdisk_id='',reservation_id='r-ongku9tq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='dc084f46-456d-429d-85f6-836af4fccd82',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-LiveAutoBlockMigrationV225Test-1914209315',owner_user_name='tempest-LiveAutoBlockMigrationV225Test-1914209315-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:24:18Z,user_data=None,user_id='3b8229aedbc64b9691880a91d559e987',uuid=a0b3924b-4422-47c5-ba40-748e41b14d00,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.021 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converting VIF {"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.021 226437 DEBUG nova.network.os_vif_util [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.022 226437 DEBUG os_vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.023 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.023 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.024 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap982269cf-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.027 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap982269cf-4d, col_values=(('external_ids', {'iface-id': '982269cf-4df1-4bc7-9b49-f0de807afdd7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:98:da', 'vm-uuid': 'a0b3924b-4422-47c5-ba40-748e41b14d00'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:24:24 np0005592159 NetworkManager[49000]: <info>  [1769091864.0304] manager: (tap982269cf-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.033 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.036 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.036 226437 INFO os_vif [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:98:da,bridge_name='br-int',has_traffic_filtering=True,id=982269cf-4df1-4bc7-9b49-f0de807afdd7,network=Network(2b0f60bf-d43c-499d-bf6b-aded338e0ecf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap982269cf-4d')#033[00m
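os-vif plugs the port by driving the local ovsdb-server through ovsdbapp: the AddBridgeCommand for br-int is a no-op here ("Transaction caused no change"), then AddPortCommand and DbSetCommand attach tap982269cf-4d to br-int and stamp its Interface row with the Neutron port id, MAC and instance UUID. A hedged sketch of the same transaction through ovsdbapp's Open_vSwitch API; the ovsdb endpoint and timeout are assumptions, while the port name and external_ids are the ones logged above:

    # Sketch: replay the AddPortCommand/DbSetCommand transaction via ovsdbapp.
    # The unix-socket endpoint and 10 s timeout are assumptions.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/var/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    external_ids = {
        "iface-id": "982269cf-4df1-4bc7-9b49-f0de807afdd7",
        "iface-status": "active",
        "attached-mac": "fa:16:3e:03:98:da",
        "vm-uuid": "a0b3924b-4422-47c5-ba40-748e41b14d00",
    }
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap982269cf-4d", may_exist=True))
        txn.add(api.db_set("Interface", "tap982269cf-4d",
                           ("external_ids", external_ids)))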
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.085 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 DEBUG nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] No VIF found with MAC fa:16:3e:03:98:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.086 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Using config drive#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.106 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:24 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:24.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:24.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:24.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.689 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.750 226437 INFO nova.virt.libvirt.driver [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Creating config drive at /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.755 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzsoys_h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.797 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updated VIF entry in instance network info cache for port 982269cf-4df1-4bc7-9b49-f0de807afdd7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.798 226437 DEBUG nova.network.neutron [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Updating instance_info_cache with network_info: [{"id": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "address": "fa:16:3e:03:98:da", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap982269cf-4d", "ovs_interfaceid": "982269cf-4df1-4bc7-9b49-f0de807afdd7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.818 226437 DEBUG oslo_concurrency.lockutils [req-82ffc7de-be1d-4f99-a560-2dcd7c63b61b req-656f27bd-1c7c-49d5-bb9b-cdaf431aadab 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-a0b3924b-4422-47c5-ba40-748e41b14d00" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.876 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyzsoys_h" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.900 226437 DEBUG nova.storage.rbd_utils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] rbd image a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:24:24 np0005592159 nova_compute[226433]: 2026-01-22 14:24:24.904 226437 DEBUG oslo_concurrency.processutils [None req-41095c05-6002-49ba-8019-bfcbe0dfe7e0 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0b3924b-4422-47c5-ba40-748e41b14d00/disk.config a0b3924b-4422-47c5-ba40-748e41b14d00_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
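Because the image backend is RBD, the config drive is built locally and then pushed into Ceph: mkisofs packs the rendered metadata directory into an ISO9660 image at /var/lib/nova/instances/<uuid>/disk.config, and rbd import uploads it to the vms pool as <uuid>_disk.config. A sketch of those two subprocess calls through oslo.concurrency, mirroring the commands logged above (in a fresh run the /tmp/tmpyzsoys_h directory name would of course differ):

    # Sketch: the two commands logged above, driven via oslo.concurrency.
    from oslo_concurrency import processutils

    inst = "a0b3924b-4422-47c5-ba40-748e41b14d00"
    iso = "/var/lib/nova/instances/%s/disk.config" % inst

    # 1. pack the rendered metadata directory into an ISO9660 config drive
    processutils.execute(
        "/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
        "-allow-multidot", "-l", "-publisher",
        "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2", "/tmp/tmpyzsoys_h")

    # 2. import the ISO into the Ceph vms pool as the instance's .config image
    processutils.execute(
        "rbd", "import", "--pool", "vms", iso,
        "%s_disk.config" % inst, "--image-format=2",
        "--id", "openstack", "--conf", "/etc/ceph/ceph.conf")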
Jan 22 09:24:25 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:25.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:26 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:26.190+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:26.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:27 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:27.228+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:28 np0005592159 podman[245914]: 2026-01-22 14:24:28.019662523 +0000 UTC m=+0.077964526 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 09:24:28 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:28 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2857 sec, osd.2 has slow ops (SLOW_OPS)
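Throughout this window osd.2 keeps reporting the same 29 slow ops, all delayed reads against the vms pool, with the oldest (an omap-get-vals on rbd_mirror_snapshot_schedule) blocked for 2857 s at this point, so the mon re-raises the SLOW_OPS health check every few seconds. A hedged sketch of reading that health check programmatically with the rados Python bindings, roughly what "ceph health detail --format json" returns (using client.openstack here is an assumption; any client with mon caps would do):

    # Sketch: read the SLOW_OPS health check via the rados Python bindings.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          rados_id="openstack")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "health", "detail": "detail",
                          "format": "json"})
        ret, out, errs = cluster.mon_command(cmd, b"")
        slow = json.loads(out).get("checks", {}).get("SLOW_OPS")
        if slow:
            print(slow["summary"]["message"])
            # e.g. "29 slow ops, oldest one blocked for 2857 sec, ..."
    finally:
        cluster.shutdown()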
Jan 22 09:24:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:28.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:28.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:28.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:29 np0005592159 nova_compute[226433]: 2026-01-22 14:24:29.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:29 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:29.220+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:29 np0005592159 nova_compute[226433]: 2026-01-22 14:24:29.691 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:30 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:30.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:30.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:30.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:31 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:31.257+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:32.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:32.253+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #94. Immutable memtables: 0.
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.323760) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 94
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872323795, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 1249, "num_deletes": 251, "total_data_size": 2152088, "memory_usage": 2194728, "flush_reason": "Manual Compaction"}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #95: started
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872331901, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 95, "file_size": 922025, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48245, "largest_seqno": 49488, "table_properties": {"data_size": 917757, "index_size": 1664, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13046, "raw_average_key_size": 21, "raw_value_size": 907787, "raw_average_value_size": 1500, "num_data_blocks": 72, "num_entries": 605, "num_filter_entries": 605, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091798, "oldest_key_time": 1769091798, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 8203 microseconds, and 3860 cpu microseconds.
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.331949) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #95: 922025 bytes OK
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.331983) [db/memtable_list.cc:519] [default] Level-0 commit table #95 started
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333757) [db/memtable_list.cc:722] [default] Level-0 commit table #95: memtable #1 done
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333771) EVENT_LOG_v1 {"time_micros": 1769091872333767, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.333789) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 2145974, prev total WAL file size 2145974, number of live WAL files 2.
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000091.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.334553) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323534' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [95(900KB)], [93(10MB)]
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872334605, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [95], "files_L6": [93], "score": -1, "input_data_size": 12095662, "oldest_snapshot_seqno": -1}
Jan 22 09:24:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:32.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #96: 9095 keys, 8667418 bytes, temperature: kUnknown
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872390193, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 96, "file_size": 8667418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8616931, "index_size": 26631, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 244727, "raw_average_key_size": 26, "raw_value_size": 8461079, "raw_average_value_size": 930, "num_data_blocks": 1006, "num_entries": 9095, "num_filter_entries": 9095, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091872, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 96, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.390951) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8667418 bytes
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393024) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.7 rd, 154.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(22.5) write-amplify(9.4) OK, records in: 9579, records dropped: 484 output_compression: NoCompression
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.393080) EVENT_LOG_v1 {"time_micros": 1769091872393064, "job": 58, "event": "compaction_finished", "compaction_time_micros": 56083, "compaction_time_cpu_micros": 21524, "output_level": 6, "num_output_files": 1, "total_output_size": 8667418, "num_input_records": 9579, "num_output_records": 9095, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872393466, "job": 58, "event": "table_file_deletion", "file_number": 95}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000093.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091872395261, "job": 58, "event": "table_file_deletion", "file_number": 93}
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.334456) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395365) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:24:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:24:32.395376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
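The mon's rocksdb activity above is routine housekeeping: JOB 57 flushes a ~0.9 MB L0 table (#95), JOB 58 immediately compacts it with the existing ~10.7 MB L6 file into table #96, and the old WAL and SST files are deleted. The amplification and throughput figures it prints follow directly from the byte counts and timing in those events; a quick check (values copied from the JOB 57/58 lines above; MB here means 10^6 bytes, so bytes per microsecond equals MB per second):

    # Sketch: recompute rocksdb's reported figures for JOB 58 from the log.
    l0_in = 922_025          # flushed table #95, the L0 input
    total_in = 12_095_662    # "input_data_size" of the compaction
    out = 8_667_418          # compacted table #96 written to L6
    micros = 56_083          # "compaction_time_micros"

    print(round(out / l0_in, 1))               # write-amplify      -> 9.4
    print(round((total_in + out) / l0_in, 1))  # read-write-amplify -> 22.5
    print(round(total_in / micros, 1))         # read MB/sec        -> 215.7
    print(round(out / micros, 1))              # write MB/sec       -> 154.5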
Jan 22 09:24:33 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:33 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:33.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:34 np0005592159 nova_compute[226433]: 2026-01-22 14:24:34.034 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:34.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:34.241+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:34 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:34.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:34 np0005592159 nova_compute[226433]: 2026-01-22 14:24:34.694 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:35.202+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:35 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:36.204+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:24:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:36.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:24:36 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:36.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:37.208+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:37 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:24:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:38.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:24:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:38.226+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:38 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:38 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:38.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:39 np0005592159 nova_compute[226433]: 2026-01-22 14:24:39.037 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:39.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:39 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:39 np0005592159 nova_compute[226433]: 2026-01-22 14:24:39.696 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:24:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:40.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:24:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:40.251+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:40 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:40.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:41.257+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:41 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:24:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:42.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:24:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:42.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:42 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:42.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:43.292+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:43 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:43 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:44 np0005592159 nova_compute[226433]: 2026-01-22 14:24:44.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:44.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:44.290+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:24:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:24:44 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:44 np0005592159 nova_compute[226433]: 2026-01-22 14:24:44.697 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:45 np0005592159 podman[246002]: 2026-01-22 14:24:45.029183061 +0000 UTC m=+0.086648586 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 09:24:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:45.273+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:45 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:45 np0005592159 nova_compute[226433]: 2026-01-22 14:24:45.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:46.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:24:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:46.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:24:46 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:46 np0005592159 nova_compute[226433]: 2026-01-22 14:24:46.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.198 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:24:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:47.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:47 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:24:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:24:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:48.287+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:48.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:48 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:49.273+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:24:49 np0005592159 nova_compute[226433]: 2026-01-22 14:24:49.700 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:50.278+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:50.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:50 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:51.247+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.552 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.553 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.813 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.813 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.814 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:24:51 np0005592159 nova_compute[226433]: 2026-01-22 14:24:51.814 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:24:52 np0005592159 nova_compute[226433]: 2026-01-22 14:24:52.099 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:24:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:52.203+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:52.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:52.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:52 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:52 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:53 np0005592159 nova_compute[226433]: 2026-01-22 14:24:53.014 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:24:53 np0005592159 nova_compute[226433]: 2026-01-22 14:24:53.032 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:24:53 np0005592159 nova_compute[226433]: 2026-01-22 14:24:53.032 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:24:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:53.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:53 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:24:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:24:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.049 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:54.218+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 29 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:54.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:54.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.552 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.553 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.553 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.554 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.554 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:54 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.703 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:24:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3002855907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:24:54 np0005592159 nova_compute[226433]: 2026-01-22 14:24:54.972 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.043 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.043 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.047 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.047 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.204 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.205 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4486MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.206 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.206 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:24:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:55.225+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.347 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.348 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.528 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:24:55 np0005592159 ceph-mon[77081]: 29 slow requests (by type [ 'delayed' : 29 ] most affected pool [ 'vms' : 24 ])
Jan 22 09:24:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:24:55 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/311209811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.941 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.413s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.947 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.968 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.996 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:24:55 np0005592159 nova_compute[226433]: 2026-01-22 14:24:55.996 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:24:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:56.186+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:56.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:56.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:56 np0005592159 nova_compute[226433]: 2026-01-22 14:24:56.997 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:56 np0005592159 nova_compute[226433]: 2026-01-22 14:24:56.997 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:24:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:57.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:57 np0005592159 ceph-mon[77081]: Health check update: 29 slow ops, oldest one blocked for 2887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:24:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:58.176+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:24:58.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:24:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:24:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:24:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:24:58 np0005592159 podman[246278]: 2026-01-22 14:24:58.442299886 +0000 UTC m=+0.110748027 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:24:58 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:24:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:24:59 np0005592159 nova_compute[226433]: 2026-01-22 14:24:59.052 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:24:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:24:59.217+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:24:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:24:59 np0005592159 nova_compute[226433]: 2026-01-22 14:24:59.705 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:00.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:00.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:00.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:00 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:01.242+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:02.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:02.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:02.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:03.248+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:03 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:04 np0005592159 nova_compute[226433]: 2026-01-22 14:25:04.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:04.205+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:25:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:04.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:25:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:04.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:04 np0005592159 nova_compute[226433]: 2026-01-22 14:25:04.706 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:05.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:25:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:25:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:06.262+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:06.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:07.269+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:08.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:08.271+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:08.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:09 np0005592159 nova_compute[226433]: 2026-01-22 14:25:09.060 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:09.275+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:09 np0005592159 nova_compute[226433]: 2026-01-22 14:25:09.708 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:10.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:25:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:10.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:25:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:10.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:11.233+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:12.265+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:25:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:12.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:25:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:12.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:13.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:14 np0005592159 nova_compute[226433]: 2026-01-22 14:25:14.063 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:14.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:14.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:14.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:14 np0005592159 nova_compute[226433]: 2026-01-22 14:25:14.711 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:15.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:15 np0005592159 podman[246340]: 2026-01-22 14:25:15.985925135 +0000 UTC m=+0.051514485 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 09:25:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:16.335+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:16.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:17 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:17.343+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:18 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:18.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:18.385+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:18.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:19 np0005592159 nova_compute[226433]: 2026-01-22 14:25:19.112 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:19 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:19.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:19 np0005592159 nova_compute[226433]: 2026-01-22 14:25:19.713 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:25:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:25:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:20.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000048s ======
Jan 22 09:25:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:20.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000048s
Jan 22 09:25:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:21.488+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000049s ======
Jan 22 09:25:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:22.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000049s
Jan 22 09:25:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:22.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:22.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:23.557+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:24 np0005592159 nova_compute[226433]: 2026-01-22 14:25:24.115 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:24.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:24.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:24 np0005592159 nova_compute[226433]: 2026-01-22 14:25:24.751 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:25.574+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:26.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:26.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:26.548+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:27 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:27.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:28.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:28.578+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:28 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:25:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 9273 writes, 50K keys, 9273 commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 9273 writes, 9273 syncs, 1.00 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1835 writes, 9473 keys, 1835 commit groups, 1.0 writes per commit group, ingest: 16.37 MB, 0.03 MB/s#012Interval WAL: 1835 writes, 1835 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.0      0.72              0.17        29    0.025       0      0       0.0       0.0#012  L6      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    134.2    113.1      2.18              0.69        28    0.078    199K    15K       0.0       0.0#012 Sum      1/0    8.27 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6    100.8    103.6      2.90              0.87        57    0.051    199K    15K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8    102.5    100.0      0.75              0.20        14    0.053     64K   3548       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    134.2    113.1      2.18              0.69        28    0.078    199K    15K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.4      0.72              0.17        28    0.026       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3000.0 total, 600.0 interval#012Flush(GB): cumulative 0.053, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.29 GB write, 0.10 MB/s write, 0.29 GB read, 0.10 MB/s read, 2.9 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 31.40 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000355 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1660,30.06 MB,9.88744%) FilterBlock(57,569.98 KB,0.1831%) IndexBlock(57,805.67 KB,0.258812%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:25:29 np0005592159 podman[246418]: 2026-01-22 14:25:29.058127816 +0000 UTC m=+0.111665300 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:25:29 np0005592159 nova_compute[226433]: 2026-01-22 14:25:29.117 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:29.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:29 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:29 np0005592159 nova_compute[226433]: 2026-01-22 14:25:29.753 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:30.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:30.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:30.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:31.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:31 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:32.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:32.489+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:32 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2918 sec, osd.2 has slow ops (SLOW_OPS)
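The SLOW_OPS updates above age by roughly the wall-clock interval between them (2918 s before 14:25:32 UTC puts the oldest blocked op at about 13:36:54 UTC), and the osd_op text is identical each time, so the same request has been stuck throughout. A minimal sketch of checking this state programmatically, assuming the ceph CLI and the client.openstack credentials used elsewhere in this log are usable from this host; the exact JSON layout of `ceph health detail` varies between Ceph releases:

    # Sketch: query cluster health as JSON and report the SLOW_OPS check that
    # ceph-mon is logging above. Credentials/paths mirror the "ceph df" call
    # nova-compute makes later in this log; treat them as assumptions.
    import json
    import subprocess

    def slow_ops_summary(conf="/etc/ceph/ceph.conf", user="openstack"):
        out = subprocess.check_output(
            ["ceph", "health", "detail", "--format", "json",
             "--id", user, "--conf", conf],
            text=True,
        )
        checks = json.loads(out).get("checks", {})
        slow = checks.get("SLOW_OPS")
        if not slow:
            return "no SLOW_OPS check active"
        # "summary"/"message" are the usual field names but may differ by release.
        return slow.get("summary", {}).get("message", "SLOW_OPS active")

    if __name__ == "__main__":
        print(slow_ops_summary())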
Jan 22 09:25:32 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:33.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:34 np0005592159 nova_compute[226433]: 2026-01-22 14:25:34.148 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:25:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:34.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:25:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:25:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:34.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:25:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:34.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:34 np0005592159 nova_compute[226433]: 2026-01-22 14:25:34.755 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:34 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:35.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:36.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:36.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:36.449+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:37.495+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:37 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:38.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:38.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:38.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:38 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:39 np0005592159 nova_compute[226433]: 2026-01-22 14:25:39.152 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:39.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:39 np0005592159 nova_compute[226433]: 2026-01-22 14:25:39.800 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:39 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:40.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:40.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:40.538+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:41.544+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:41 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:42.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:42.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:42.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:43 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:43 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:43.579+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:44 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:44 np0005592159 nova_compute[226433]: 2026-01-22 14:25:44.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:44.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:25:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:44.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:25:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:44.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:44 np0005592159 nova_compute[226433]: 2026-01-22 14:25:44.802 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:45 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:45.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:46.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:46.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:46 np0005592159 nova_compute[226433]: 2026-01-22 14:25:46.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:46.562+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:47 np0005592159 podman[246502]: 2026-01-22 14:25:47.00408548 +0000 UTC m=+0.067589801 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:25:47 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.199 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:25:47.200 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
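The three ovn_metadata_agent lines above are the standard oslo.concurrency pattern: a named lock is acquired, held for the duration of ProcessMonitor._check_child_processes, then released, with lockutils emitting the acquired/released debug messages. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the function bodies and lock name below are illustrative, not Neutron's actual code:

    # Sketch of the acquire/hold/release pattern behind the lockutils debug
    # lines above. Placeholder bodies; only the locking calls are the point.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Work done while the named in-process lock is held; lockutils logs
        # the "Acquiring"/"acquired"/"released" messages seen in the journal.
        pass

    # Equivalent context-manager form:
    def check_child_processes_inline():
        with lockutils.lock("_check_child_processes"):
            pass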
Jan 22 09:25:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:47.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:48 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:48 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:48.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:48.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:48 np0005592159 nova_compute[226433]: 2026-01-22 14:25:48.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:48.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:49 np0005592159 nova_compute[226433]: 2026-01-22 14:25:49.225 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:49 np0005592159 nova_compute[226433]: 2026-01-22 14:25:49.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:49.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:49 np0005592159 nova_compute[226433]: 2026-01-22 14:25:49.804 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:50 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:25:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:50.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:25:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:50.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:50 np0005592159 nova_compute[226433]: 2026-01-22 14:25:50.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:50.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:51 np0005592159 nova_compute[226433]: 2026-01-22 14:25:51.156 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:51 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:51.575+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:51 np0005592159 nova_compute[226433]: 2026-01-22 14:25:51.590 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:51 np0005592159 nova_compute[226433]: 2026-01-22 14:25:51.591 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
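_reclaim_queued_deletes runs here but skips its work because reclaim_instance_interval is not set to a positive value. The periodic tasks named throughout these nova_compute lines are declared with the oslo.service decorator; a minimal sketch of that pattern, assuming oslo.service and oslo.config are available (the class, spacing, and option default below are placeholders, not Nova's real definitions):

    # Sketch of the oslo.service periodic-task pattern behind the
    # "Running periodic task ComputeManager.<name>" debug lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class ExampleManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # Mirrors the "CONF.reclaim_instance_interval <= 0,
                # skipping..." message in the journal.
                return
            # ... reclaim soft-deleted instances here ...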
Jan 22 09:25:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:52.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:52 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:52.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:52 np0005592159 nova_compute[226433]: 2026-01-22 14:25:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:52 np0005592159 nova_compute[226433]: 2026-01-22 14:25:52.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:25:52 np0005592159 nova_compute[226433]: 2026-01-22 14:25:52.534 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:25:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:52.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:53 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.534 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.535 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.535 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.567 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.567 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.568 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:25:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:53.581+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.821 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:25:53 np0005592159 nova_compute[226433]: 2026-01-22 14:25:53.822 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:25:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.076 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.228 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:54.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:54 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:25:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:54.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.515 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.547 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.547 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.548 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:54.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.595 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.595 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.596 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.596 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.597 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:25:54 np0005592159 nova_compute[226433]: 2026-01-22 14:25:54.806 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:25:55 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2696841303' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.093 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
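The resource audit shells out to the exact command logged just above (ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf) to size the RBD-backed storage. A minimal sketch of running the same command and reading the totals, assuming the same flags work from this host; the JSON field names are the usual ceph-df keys and should be treated as assumptions for this cluster:

    # Sketch: run the same "ceph df" command nova-compute logs above and
    # report cluster plus per-pool usage. Field names ("stats", "pools",
    # "max_avail", "bytes_used") are the common ceph-df JSON keys.
    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    df = json.loads(subprocess.check_output(cmd, text=True))

    stats = df.get("stats", {})
    gib = 2 ** 30
    print(f"cluster: {stats.get('total_avail_bytes', 0) / gib:.1f} GiB free "
          f"of {stats.get('total_bytes', 0) / gib:.1f} GiB")

    for pool in df.get("pools", []):
        pstats = pool.get("stats", {})
        print(f"pool {pool.get('name')}: "
              f"used={pstats.get('bytes_used', 0) / gib:.2f} GiB, "
              f"max_avail={pstats.get('max_avail', 0) / gib:.2f} GiB")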
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.211 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.211 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.218 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.218 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.375 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4486MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
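The resource-tracker line above dumps the hypervisor view (free_ram=4486MB, free_disk≈20.75GB, free_vcpus=7) along with the host's PCI device list as inline JSON: eleven functions, six virtio (vendor 1af4) and five Intel chipset (vendor 8086). A small sketch of inspecting that structure, using a two-entry excerpt copied from the line above:

    # Sketch: tally the pci_devices list embedded in the resource-tracker
    # line. Only two of the eleven entries are reproduced here.
    import collections
    import json

    pci_devices_json = """[
      {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
       "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
       "label": "label_1af4_1000", "dev_type": "type-PCI"},
      {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0",
       "product_id": "1237", "vendor_id": "8086", "numa_node": null,
       "label": "label_8086_1237", "dev_type": "type-PCI"}
    ]"""

    by_vendor = collections.Counter(
        dev["vendor_id"] for dev in json.loads(pci_devices_json))
    print(dict(by_vendor))  # {'1af4': 1, '8086': 1} for this excerpt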
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.376 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:25:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:55.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
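[editor's note] osd.2 keeps reporting the same 30 slow ops throughout this capture, with the oldest being an omap read against rbd_mirror_snapshot_schedule. A minimal sketch, assuming the ceph CLI and osd.2's admin socket are reachable from this host (for example from inside the cephadm shell), of how one might pull the detail behind these get_health_metrics summaries; the helper name and printing are illustrative, not part of the log.

```python
# Sketch: drill into the SLOW_OPS warnings seen above.
# Assumes ceph CLI access with a keyring that can read cluster health,
# and that osd.2's admin socket is on this host.
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI command and decode its JSON output."""
    out = subprocess.check_output(["ceph", *args, "--format", "json"])
    return json.loads(out)

# Cluster-level view (e.g. the SLOW_OPS check reported by the monitor).
health = ceph_json("health", "detail")
for name, check in health.get("checks", {}).items():
    print(name, check["summary"]["message"])

# Per-op detail from the OSD's admin socket (run where osd.2 lives).
ops = json.loads(subprocess.check_output(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"]))
for op in ops.get("ops", []):
    print(op["age"], op["description"])
```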
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.874 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.875 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:25:55 np0005592159 nova_compute[226433]: 2026-01-22 14:25:55.876 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.260 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:25:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:56.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
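[editor's note] The beast access lines show anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 returning 200 roughly every two seconds, the pattern of an external health probe against the RGW frontend. The snippet below is a sketch of the same probe; the host and port are placeholders, since the log records the probing clients but not the address the gateway is bound to.

```python
# Sketch of the anonymous "HEAD /" probe the beast access lines record.
# RGW_HOST and RGW_PORT are assumptions -- not taken from the log.
import http.client

RGW_HOST = "192.168.122.106"   # placeholder for this node's RGW address
RGW_PORT = 8080                # placeholder; the bound port is not logged

conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)             # the probes above all return 200
conn.close()
```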
Jan 22 09:25:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:56.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:56.515+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:25:56 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1661484102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.727 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
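[editor's note] The two processutils lines above show nova's resource tracker shelling out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" to size the RBD-backed storage. A minimal sketch of issuing the same query directly and reducing it to cluster and pool totals; the exact JSON field names can vary between Ceph releases, so treat them as assumptions.

```python
# Sketch: the same "ceph df" query the resource tracker runs above,
# executed directly. Field names follow common ceph df JSON output and
# may differ by release.
import json
import subprocess

cmd = ["ceph", "df", "--format=json", "--id", "openstack",
       "--conf", "/etc/ceph/ceph.conf"]
df = json.loads(subprocess.check_output(cmd))

stats = df["stats"]
total_gb = stats["total_bytes"] / 1024**3
avail_gb = stats["total_avail_bytes"] / 1024**3
print(f"cluster: {total_gb:.1f} GiB total, {avail_gb:.1f} GiB available")

for pool in df.get("pools", []):
    used_mb = pool["stats"].get("bytes_used", 0) / 1024**2
    print(f"pool {pool['name']}: {used_mb:.1f} MiB used")
```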
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.733 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.773 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
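[editor's note] The inventory reported to placement above carries total, reserved and allocation_ratio per resource class. As a rough cross-check (not nova code), the effective capacity placement schedules against is conventionally (total - reserved) * allocation_ratio; the snippet below replays that arithmetic on the values from this log line.

```python
# Back-of-the-envelope check of the inventory reported above:
# schedulable capacity per class = (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: schedulable capacity {capacity:g}")
# VCPU 32, MEMORY_MB 7167, DISK_GB 17.1 -- consistent with the
# "Final resource view" line above (6 of 8 vCPUs and 1280 MB in use).
```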
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.776 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:25:56 np0005592159 nova_compute[226433]: 2026-01-22 14:25:56.776 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:25:57 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:25:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:57.466+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:57 np0005592159 nova_compute[226433]: 2026-01-22 14:25:57.744 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:57 np0005592159 nova_compute[226433]: 2026-01-22 14:25:57.745 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:25:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:25:58.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:25:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:25:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:25:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:25:58.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:25:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:58.499+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:58 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:25:59 np0005592159 nova_compute[226433]: 2026-01-22 14:25:59.233 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:25:59.462+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:25:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:59 np0005592159 nova_compute[226433]: 2026-01-22 14:25:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:25:59 np0005592159 podman[246754]: 2026-01-22 14:25:59.619268153 +0000 UTC m=+0.151192443 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
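[editor's note] The podman event above records a periodic health_status=healthy result for the ovn_controller container, whose configured check is the '/openstack/healthcheck' script mounted at /openstack. A small sketch, assuming podman is available on this host, of querying that same health state on demand; note the inspect template path differs between podman versions, as commented.

```python
# Sketch: query the container health state that the podman health_status
# events above record. Assumes podman is on PATH on this host.
import subprocess

name = "ovn_controller"

# Current health status as tracked by podman. Newer podman exposes
# .State.Health.Status; older releases used .State.Healthcheck.Status.
status = subprocess.check_output(
    ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
    text=True).strip()
print(f"{name}: {status}")    # expected "healthy" per the event above

# Force an immediate run of the configured check ('/openstack/healthcheck').
subprocess.run(["podman", "healthcheck", "run", name], check=False)
```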
Jan 22 09:25:59 np0005592159 nova_compute[226433]: 2026-01-22 14:25:59.807 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #97. Immutable memtables: 0.
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.830400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 97
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959830437, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1441, "num_deletes": 251, "total_data_size": 2522195, "memory_usage": 2571048, "flush_reason": "Manual Compaction"}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #98: started
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959841944, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 98, "file_size": 1644723, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49493, "largest_seqno": 50929, "table_properties": {"data_size": 1639072, "index_size": 2791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14729, "raw_average_key_size": 20, "raw_value_size": 1626619, "raw_average_value_size": 2303, "num_data_blocks": 120, "num_entries": 706, "num_filter_entries": 706, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091873, "oldest_key_time": 1769091873, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 11592 microseconds, and 4290 cpu microseconds.
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.841988) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #98: 1644723 bytes OK
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.842008) [db/memtable_list.cc:519] [default] Level-0 commit table #98 started
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.843996) [db/memtable_list.cc:722] [default] Level-0 commit table #98: memtable #1 done
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844041) EVENT_LOG_v1 {"time_micros": 1769091959844032, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844063) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2515301, prev total WAL file size 2515301, number of live WAL files 2.
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000094.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.845081) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [98(1606KB)], [96(8464KB)]
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959845211, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [98], "files_L6": [96], "score": -1, "input_data_size": 10312141, "oldest_snapshot_seqno": -1}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #99: 9284 keys, 8614036 bytes, temperature: kUnknown
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959897610, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 99, "file_size": 8614036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8562638, "index_size": 27094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23237, "raw_key_size": 249886, "raw_average_key_size": 26, "raw_value_size": 8403673, "raw_average_value_size": 905, "num_data_blocks": 1020, "num_entries": 9284, "num_filter_entries": 9284, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769091959, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 99, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.897830) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8614036 bytes
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.899361) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.6 rd, 164.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 8.3 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(11.5) write-amplify(5.2) OK, records in: 9801, records dropped: 517 output_compression: NoCompression
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.899377) EVENT_LOG_v1 {"time_micros": 1769091959899369, "job": 60, "event": "compaction_finished", "compaction_time_micros": 52444, "compaction_time_cpu_micros": 25488, "output_level": 6, "num_output_files": 1, "total_output_size": 8614036, "num_input_records": 9801, "num_output_records": 9284, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
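[editor's note] The compaction summary above reports read-write-amplify(11.5) and write-amplify(5.2) for JOB 60. Those ratios follow from the byte counts logged for the same job (1,644,723 bytes of L0 input from table #98, 10,312,141 bytes of total input, 8,614,036 bytes written to table #99); the snippet below just replays that arithmetic, using output bytes over L0 input for write amplification and total read plus written over L0 input for the combined figure.

```python
# Replaying the amplification figures reported for rocksdb JOB 60 above.
l0_input    = 1_644_723      # table #98, flushed from the memtable
total_input = 10_312_141     # "input_data_size" (L0 table #98 + L6 table #96)
written     = 8_614_036      # output table #99

write_amplify      = written / l0_input                  # ~5.2 in the log
read_write_amplify = (total_input + written) / l0_input  # ~11.5 in the log
print(f"write-amplify {write_amplify:.1f}, "
      f"read-write-amplify {read_write_amplify:.1f}")
```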
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959899871, "job": 60, "event": "table_file_deletion", "file_number": 98}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000096.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769091959901194, "job": 60, "event": "table_file_deletion", "file_number": 96}
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.844835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901342) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:25:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:25:59.901347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:26:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:00.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:00.417+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:00.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:00 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:26:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:26:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:01.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:02.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:02.346+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:02.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:03.315+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:03 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2952 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:04 np0005592159 nova_compute[226433]: 2026-01-22 14:26:04.284 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:04.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:04.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:04.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:04 np0005592159 nova_compute[226433]: 2026-01-22 14:26:04.809 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:05.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:26:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:06.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:06.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:06 np0005592159 nova_compute[226433]: 2026-01-22 14:26:06.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:07.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:08.295+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:08.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:09 np0005592159 nova_compute[226433]: 2026-01-22 14:26:09.286 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:09.337+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:09 np0005592159 nova_compute[226433]: 2026-01-22 14:26:09.811 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:10.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:10.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:10.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:11.369+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:11 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:12.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:12.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:12.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:12 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:13.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:14 np0005592159 nova_compute[226433]: 2026-01-22 14:26:14.289 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:14.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:14.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:14.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:14 np0005592159 nova_compute[226433]: 2026-01-22 14:26:14.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:14 np0005592159 nova_compute[226433]: 2026-01-22 14:26:14.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:26:14 np0005592159 nova_compute[226433]: 2026-01-22 14:26:14.813 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:15.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:16.351+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:16.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:16.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:17.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:17 np0005592159 podman[246889]: 2026-01-22 14:26:17.993715811 +0000 UTC m=+0.054314774 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2967 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655468272' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:26:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:18.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:18.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:19 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:19 np0005592159 nova_compute[226433]: 2026-01-22 14:26:19.293 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:19.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:19 np0005592159 nova_compute[226433]: 2026-01-22 14:26:19.815 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:20.333+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:20.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:21.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:21 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:22.325+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:22.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.955 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 WARNING nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] While synchronizing instance power states, found 6 instances in the database and 2 instances on the hypervisor.#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for e0e74330-96df-479f-8baf-53fbd2ccba91 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for f591d61b-712e-49aa-85bd-8d222b607eb3 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.987 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Sync already in progress for 87e798e6-6f00-4fe1-8412-75ddc9e2878e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10266#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid 8331b067-1b3f-4a1d-a596-e966f6de776a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Triggering sync for uuid a0b3924b-4422-47c5-ba40-748e41b14d00 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.988 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.989 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:22 np0005592159 nova_compute[226433]: 2026-01-22 14:26:22.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:23 np0005592159 nova_compute[226433]: 2026-01-22 14:26:23.019 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:23.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:24 np0005592159 nova_compute[226433]: 2026-01-22 14:26:24.297 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:24.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:24.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:24.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:24 np0005592159 nova_compute[226433]: 2026-01-22 14:26:24.818 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:25.294+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:26.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:26.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:27.301+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:27 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2977 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:28.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:28.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:28.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:29 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:29.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:29 np0005592159 nova_compute[226433]: 2026-01-22 14:26:29.301 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:29 np0005592159 nova_compute[226433]: 2026-01-22 14:26:29.866 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:30 np0005592159 podman[246914]: 2026-01-22 14:26:30.048001528 +0000 UTC m=+0.096348476 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:26:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:30.246+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:30.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:31 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:26:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.5 total, 600.0 interval#012Cumulative writes: 7904 writes, 30K keys, 7904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7904 writes, 1924 syncs, 4.11 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 927 writes, 2915 keys, 927 commit groups, 1.0 writes per commit group, ingest: 2.54 MB, 0.00 MB/s#012Interval WAL: 927 writes, 373 syncs, 2.49 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:26:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:31.245+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:32 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:32.229+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:32.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:33 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2982 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:33 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:33.213+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:34 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:34.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:34 np0005592159 nova_compute[226433]: 2026-01-22 14:26:34.303 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000055s ======
Jan 22 09:26:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:34.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000055s
Jan 22 09:26:34 np0005592159 nova_compute[226433]: 2026-01-22 14:26:34.868 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:35.212+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:36.199+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:26:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:26:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:36.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:37.236+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:38.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:38 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:38 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:38.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:38.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:39.283+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:39 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:39 np0005592159 nova_compute[226433]: 2026-01-22 14:26:39.305 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:39 np0005592159 nova_compute[226433]: 2026-01-22 14:26:39.870 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:40.307+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:40.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:40.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:41.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:41 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:42.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:42 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:43 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:43 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:43.329+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:44 np0005592159 nova_compute[226433]: 2026-01-22 14:26:44.309 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:44 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:44.372+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:44.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:44 np0005592159 nova_compute[226433]: 2026-01-22 14:26:44.873 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:45.349+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:45 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:46.377+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:46.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:46.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.200 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.201 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:26:47.201 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:47.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:47 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:47 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 2997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:48.340+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:48.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:48 np0005592159 nova_compute[226433]: 2026-01-22 14:26:48.551 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:48 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:48 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:48 np0005592159 podman[247001]: 2026-01-22 14:26:48.98620666 +0000 UTC m=+0.050931778 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_managed=true)
Jan 22 09:26:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:49.494+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:49 np0005592159 nova_compute[226433]: 2026-01-22 14:26:49.497 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:49 np0005592159 nova_compute[226433]: 2026-01-22 14:26:49.874 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:50.453+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:50 np0005592159 nova_compute[226433]: 2026-01-22 14:26:50.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:50 np0005592159 nova_compute[226433]: 2026-01-22 14:26:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:50 np0005592159 nova_compute[226433]: 2026-01-22 14:26:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:51 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:51.419+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:52 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:52.387+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:52.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:52.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:53 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops (SLOW_OPS)
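
The SLOW_OPS health check reported above can also be read programmatically. A minimal sketch, assuming the ceph CLI is reachable and that the client.openstack credentials already used elsewhere in this log have monitor read access (adjust --id/--conf for the environment at hand):

import json
import subprocess

# Query cluster health in JSON and pull out the SLOW_OPS check seen above.
out = subprocess.check_output(
    ["ceph", "health", "detail", "--format", "json",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
)
health = json.loads(out)
slow = health.get("checks", {}).get("SLOW_OPS")
if slow:
    # e.g. "30 slow ops, oldest one blocked for 3003 sec, osd.2 has slow ops"
    print(slow["summary"]["message"])
    for item in slow.get("detail", []):
        print(item["message"])
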
Jan 22 09:26:53 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:53.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:53 np0005592159 nova_compute[226433]: 2026-01-22 14:26:53.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:53 np0005592159 nova_compute[226433]: 2026-01-22 14:26:53.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:26:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:54 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:54.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:54.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:54 np0005592159 nova_compute[226433]: 2026-01-22 14:26:54.500 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:54 np0005592159 nova_compute[226433]: 2026-01-22 14:26:54.876 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:55.415+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.537 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.537 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.538 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.958 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:26:55 np0005592159 nova_compute[226433]: 2026-01-22 14:26:55.959 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.147 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:26:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:56.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:56.435+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.571 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.588 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.613 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.614 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:26:56 np0005592159 nova_compute[226433]: 2026-01-22 14:26:56.614 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:26:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:26:57 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/259508654' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.091 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
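
The resource tracker measures Ceph-backed storage capacity by shelling out to the same "ceph df --format=json" command logged above. A stand-alone sketch of that call — field names follow the ceph df JSON layout ("stats" for cluster-wide totals, "pools" for per-pool figures), not Nova's internal code:

import json
import subprocess

# Same command the resource tracker runs above, parsed stand-alone.
raw = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
df = json.loads(raw)
stats = df["stats"]
total_gib = stats["total_bytes"] / 2**30
avail_gib = stats["total_avail_bytes"] / 2**30
print(f"cluster: {total_gib:.1f} GiB total, {avail_gib:.1f} GiB available")
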
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.171 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.172 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.175 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.176 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.299 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4455MB free_disk=20.750900268554688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.300 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:26:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:57 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.389 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.390 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.391 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.410 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.425 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.425 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
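
For the inventory reported above, Placement-style schedulable capacity works out as (total - reserved) * allocation_ratio. A short worked example using only the values in the logged inventory dict:

# Capacity per resource class, from the inventory logged above:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~17.1 (rounded)
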
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.442 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.463 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:26:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:57.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:57 np0005592159 nova_compute[226433]: 2026-01-22 14:26:57.866 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:26:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:26:58 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/815054871' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:26:58 np0005592159 nova_compute[226433]: 2026-01-22 14:26:58.282 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:26:58 np0005592159 nova_compute[226433]: 2026-01-22 14:26:58.287 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:26:58 np0005592159 nova_compute[226433]: 2026-01-22 14:26:58.305 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:26:58 np0005592159 nova_compute[226433]: 2026-01-22 14:26:58.307 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:26:58 np0005592159 nova_compute[226433]: 2026-01-22 14:26:58.307 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:26:58 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:26:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:26:58.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:26:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:58.473+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:26:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:26:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:26:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:26:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:26:59 np0005592159 nova_compute[226433]: 2026-01-22 14:26:59.236 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:26:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:26:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:26:59.458+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:26:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:26:59 np0005592159 nova_compute[226433]: 2026-01-22 14:26:59.548 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:26:59 np0005592159 nova_compute[226433]: 2026-01-22 14:26:59.878 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:00 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 09:27:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:00.454+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:01 np0005592159 podman[247122]: 2026-01-22 14:27:01.005224629 +0000 UTC m=+0.071438536 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 22 09:27:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:01.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:02.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:02.420+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:02.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:03.465+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:03 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:03 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 3012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:04.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:04.434+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:04 np0005592159 nova_compute[226433]: 2026-01-22 14:27:04.550 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:04 np0005592159 nova_compute[226433]: 2026-01-22 14:27:04.879 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:05.479+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:06.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:06.482+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:06.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #100. Immutable memtables: 0.
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.371522) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 100
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027371595, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 1165, "num_deletes": 256, "total_data_size": 1971498, "memory_usage": 1997016, "flush_reason": "Manual Compaction"}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #101: started
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027382964, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 101, "file_size": 1294814, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50934, "largest_seqno": 52094, "table_properties": {"data_size": 1289998, "index_size": 2212, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12371, "raw_average_key_size": 20, "raw_value_size": 1279428, "raw_average_value_size": 2100, "num_data_blocks": 95, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769091960, "oldest_key_time": 1769091960, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 11485 microseconds, and 4058 cpu microseconds.
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383015) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #101: 1294814 bytes OK
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.383034) [db/memtable_list.cc:519] [default] Level-0 commit table #101 started
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385171) [db/memtable_list.cc:722] [default] Level-0 commit table #101: memtable #1 done
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385188) EVENT_LOG_v1 {"time_micros": 1769092027385183, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385205) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 1965703, prev total WAL file size 1965703, number of live WAL files 2.
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000097.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385956) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303039' seq:72057594037927935, type:22 .. '6C6F676D0032323631' seq:0, type:0; will stop at (end)
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [101(1264KB)], [99(8412KB)]
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027385991, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [101], "files_L6": [99], "score": -1, "input_data_size": 9908850, "oldest_snapshot_seqno": -1}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #102: 9366 keys, 9739638 bytes, temperature: kUnknown
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027444567, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 102, "file_size": 9739638, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9686583, "index_size": 28559, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23429, "raw_key_size": 252970, "raw_average_key_size": 27, "raw_value_size": 9524930, "raw_average_value_size": 1016, "num_data_blocks": 1078, "num_entries": 9366, "num_filter_entries": 9366, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092027, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 102, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.444826) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 9739638 bytes
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.448817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 166.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.2 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(15.2) write-amplify(7.5) OK, records in: 9893, records dropped: 527 output_compression: NoCompression
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.448838) EVENT_LOG_v1 {"time_micros": 1769092027448828, "job": 62, "event": "compaction_finished", "compaction_time_micros": 58657, "compaction_time_cpu_micros": 23777, "output_level": 6, "num_output_files": 1, "total_output_size": 9739638, "num_input_records": 9893, "num_output_records": 9366, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027449178, "job": 62, "event": "table_file_deletion", "file_number": 101}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000099.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092027450998, "job": 62, "event": "table_file_deletion", "file_number": 99}
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.385865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:27:07.451118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:27:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:07.475+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:27:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:08.447+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:08.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:09.446+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:09 np0005592159 nova_compute[226433]: 2026-01-22 14:27:09.554 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:09 np0005592159 nova_compute[226433]: 2026-01-22 14:27:09.881 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:10.400+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:10.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:10.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:11.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:11 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:12.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:12.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:27:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:12.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:27:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:12 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:13.291+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:27:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:14.312+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:14.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:14.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:14 np0005592159 nova_compute[226433]: 2026-01-22 14:27:14.598 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:14 np0005592159 nova_compute[226433]: 2026-01-22 14:27:14.883 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:15.330+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:16.367+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:16.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:17.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:17 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:17 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:18.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:19.349+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:19 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:19 np0005592159 nova_compute[226433]: 2026-01-22 14:27:19.600 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:19 np0005592159 nova_compute[226433]: 2026-01-22 14:27:19.885 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:20 np0005592159 podman[247388]: 2026-01-22 14:27:20.012102613 +0000 UTC m=+0.061269429 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 09:27:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:20.365+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:27:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:27:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:21.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:22.346+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:22.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:22.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:23 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:23.323+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:24.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:24.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:24.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:24 np0005592159 nova_compute[226433]: 2026-01-22 14:27:24.604 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:24 np0005592159 nova_compute[226433]: 2026-01-22 14:27:24.887 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:25.325+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:25 np0005592159 nova_compute[226433]: 2026-01-22 14:27:25.473 226437 DEBUG oslo_concurrency.lockutils [None req-ba0e4a49-0b53-46e9-80a4-11bd4e6c0b83 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "8331b067-1b3f-4a1d-a596-e966f6de776a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:26.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:26.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:27 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:27.393+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:28 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:28 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:28.411+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:28.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:29 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:29.459+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:29 np0005592159 nova_compute[226433]: 2026-01-22 14:27:29.606 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:29 np0005592159 nova_compute[226433]: 2026-01-22 14:27:29.890 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:30.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:30.496+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:31 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:31.508+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592159 podman[247413]: 2026-01-22 14:27:32.022149455 +0000 UTC m=+0.086937168 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:27:32 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:32.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:32.483+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:32.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:33.533+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:33 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:33 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:34.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:34.576+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:34 np0005592159 nova_compute[226433]: 2026-01-22 14:27:34.610 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:34 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:34 np0005592159 nova_compute[226433]: 2026-01-22 14:27:34.893 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:35.588+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:36.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:36.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:36.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:37.626+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:38.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:27:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:38.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:27:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:38.649+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:38 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:39.661+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:39 np0005592159 nova_compute[226433]: 2026-01-22 14:27:39.662 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:39 np0005592159 nova_compute[226433]: 2026-01-22 14:27:39.894 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:40.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:40.689+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:41 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:41.658+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:42 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:42.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:42.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:42.709+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:43 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3053 sec, osd.2 has slow ops (SLOW_OPS)
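The SLOW_OPS health check above reports ops on osd.2 blocked for roughly 3000 seconds, with the 'vms' pool most affected. A minimal diagnostic sketch, assuming the standard Ceph CLI and an admin keyring are available; the admin-socket command must be run where osd.2's socket is reachable (inside the OSD container or via cephadm shell in a containerized deployment):

# Cluster-wide view of the SLOW_OPS warning, then the ops blocked inside osd.2.
import json
import subprocess

health = json.loads(
    subprocess.check_output(["ceph", "health", "detail", "--format", "json"])
)
print(health.get("checks", {}).get("SLOW_OPS", {}))

ops = json.loads(
    subprocess.check_output(["ceph", "daemon", "osd.2", "dump_ops_in_flight"])
)
for op in ops.get("ops", []):
    # each entry carries an age and a description like the osd_op(...) above
    print(op.get("age"), op.get("description"))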
Jan 22 09:27:43 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:43.738+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:44.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:44 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:44 np0005592159 nova_compute[226433]: 2026-01-22 14:27:44.666 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:44.758+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:44 np0005592159 nova_compute[226433]: 2026-01-22 14:27:44.896 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:45.752+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:45 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:46.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:46.741+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:27:47.202 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
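The three lockutils lines above ("Acquiring", "acquired ... waited", "released ... held") are the standard oslo.concurrency pattern emitted by a function wrapped with @lockutils.synchronized when oslo logging runs at DEBUG. A minimal sketch of that pattern, with the lock name taken from the log and a placeholder body:

from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def _check_child_processes():
    # placeholder for the ProcessMonitor work guarded by the lock
    pass

_check_child_processes()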
Jan 22 09:27:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:47.703+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:48 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:48 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:48.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:48 np0005592159 nova_compute[226433]: 2026-01-22 14:27:48.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:48.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:48.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:49 np0005592159 nova_compute[226433]: 2026-01-22 14:27:49.671 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:49.775+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:27:49 np0005592159 nova_compute[226433]: 2026-01-22 14:27:49.939 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:50.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:50 np0005592159 nova_compute[226433]: 2026-01-22 14:27:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:50.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:50.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:51 np0005592159 podman[247501]: 2026-01-22 14:27:51.048543191 +0000 UTC m=+0.093332012 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 22 09:27:51 np0005592159 nova_compute[226433]: 2026-01-22 14:27:51.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:51 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:27:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:51.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:52 np0005592159 nova_compute[226433]: 2026-01-22 14:27:52.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:52.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:52.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:52 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:52 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:53.774+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:54 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:54 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 3063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:27:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:54.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:54 np0005592159 nova_compute[226433]: 2026-01-22 14:27:54.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:54 np0005592159 nova_compute[226433]: 2026-01-22 14:27:54.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:27:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:54.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:54 np0005592159 nova_compute[226433]: 2026-01-22 14:27:54.675 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:54.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:54 np0005592159 nova_compute[226433]: 2026-01-22 14:27:54.939 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.543 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.544 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:27:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:55.800+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:27:55 np0005592159 nova_compute[226433]: 2026-01-22 14:27:55.990 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:27:56 np0005592159 nova_compute[226433]: 2026-01-22 14:27:56.248 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:27:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:56.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:56.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:56 np0005592159 nova_compute[226433]: 2026-01-22 14:27:56.700 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:27:56 np0005592159 nova_compute[226433]: 2026-01-22 14:27:56.714 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:27:56 np0005592159 nova_compute[226433]: 2026-01-22 14:27:56.715 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:27:56 np0005592159 nova_compute[226433]: 2026-01-22 14:27:56.716 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:56.827+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:57.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:27:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:27:58.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.536 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.537 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.538 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:27:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:27:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:27:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:27:58.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:27:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:58.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:27:58 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3252584016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:27:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:27:58 np0005592159 nova_compute[226433]: 2026-01-22 14:27:58.965 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
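The resource tracker above shells out to "ceph df --format=json" as client.openstack to read cluster capacity (it returned 0 in 0.427s). A stand-alone sketch of the same call, assuming /etc/ceph/ceph.conf and a keyring for client.openstack exist on this host as the log line implies; exact JSON field names vary slightly between Ceph releases:

import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
df = json.loads(out)

# Overall free space plus per-pool usage; nova derives free disk from this.
print(df["stats"]["total_avail_bytes"])
for pool in df["pools"]:
    print(pool["name"],
          pool["stats"].get("bytes_used"),
          pool["stats"].get("max_avail"))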
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.038 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.038 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.041 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.041 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.216 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.217 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4472MB free_disk=20.77179718017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.217 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.218 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:27:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.305 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.306 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.420 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.678 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:27:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:27:59.819+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:27:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:27:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:27:59 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433983523' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.841 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
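The `ceph df --format=json` call above is how the resource tracker sizes its RBD-backed DISK_GB inventory against the Ceph cluster. A minimal sketch of running the same query by hand and reading the cluster totals, assuming the client id and conf path from the log line and the "stats" field names used by recent Ceph releases:

```python
# Sketch: run the same query nova issues above and read the cluster
# totals out of the JSON. Client id and conf path are taken from the log
# line; the "stats" field names follow recent Ceph releases and should be
# treated as an assumption.
import json
import subprocess

out = subprocess.check_output(
    ["ceph", "df", "--format=json", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"]
)
stats = json.loads(out)["stats"]
total_gib = stats["total_bytes"] / 1024 ** 3
avail_gib = stats["total_avail_bytes"] / 1024 ** 3
print(f"cluster: {total_gib:.1f} GiB total, {avail_gib:.1f} GiB available")
```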
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.847 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.863 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
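The inventory reported here is what placement schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. A small worked check using the numbers from this line:

```python
# Worked check of the inventory above: placement's usable capacity per
# resource class is (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity} schedulable")
# -> VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 17.1; consistent with the
#    final resource view of 6/8 vCPUs and 6/20 GB currently allocated.
```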
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.889 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.889 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:27:59 np0005592159 nova_compute[226433]: 2026-01-22 14:27:59.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:00 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:00.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
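The anonymous "HEAD /" probes from 192.168.122.100 and 192.168.122.102 recur every two seconds and look like load-balancer health checks (the haproxy and keepalived sidecars for the RGW appear further down). A sketch for pulling client, status, and latency out of these beast access lines, assuming the field layout shown here:

```python
# Sketch: parse the beast access-log lines above (field layout as seen in
# this log) to pull out client IP, HTTP status and reported latency.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
    r'latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
        '[22/Jan/2026:14:28:00.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.001000027s')
m = BEAST.search(line)
print(m.group("ip"), m.group("status"), m.group("latency"))
# -> 192.168.122.102 200 0.001000027
```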
Jan 22 09:28:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:00.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:00 np0005592159 nova_compute[226433]: 2026-01-22 14:28:00.891 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:01.808+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:02.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:02.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:02 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3068 sec, osd.2 has slow ops (SLOW_OPS)
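The SLOW_OPS health check tracks the same ops osd.2 has been reporting throughout this excerpt, and the blocked-for counter keeps growing (3068 s here, 3078 s and 3088 s below). Converting the counter back to a submission time shows the oldest op predates this log window:

```python
# Sketch: the health line above says the oldest op has been blocked for
# 3068 s as of 14:28:02 UTC (09:28:02 in the journal prefix), which
# places its submission near 13:36:54 UTC.
from datetime import datetime, timedelta, timezone

reported_at = datetime(2026, 1, 22, 14, 28, 2, tzinfo=timezone.utc)
blocked_for = timedelta(seconds=3068)
print("oldest slow op submitted around", reported_at - blocked_for)
# -> 2026-01-22 13:36:54+00:00
```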
Jan 22 09:28:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:02.782+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:03 np0005592159 podman[247622]: 2026-01-22 14:28:03.037753482 +0000 UTC m=+0.094646028 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true)
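The health_status=healthy event is podman running the configured healthcheck ('test': '/openstack/healthcheck', mounted from /var/lib/openstack/healthchecks/ovn_controller). The same state can be read back from the host; a sketch assuming the Docker-compatible inspect schema that podman exposes:

```python
# Sketch: read back the health state podman logged above. The container
# name comes from the event; the .State.Health fields follow the
# Docker-compatible inspect schema and are an assumption to verify.
import json
import subprocess

out = subprocess.check_output(
    ["podman", "inspect", "--format", "{{json .State.Health}}",
     "ovn_controller"]
)
health = json.loads(out)
print(health["Status"], "failing streak:", health.get("FailingStreak"))
```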
Jan 22 09:28:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:03.740+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
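_set_new_cache_sizes is the monitor's cache autotuner; going by the field names, the byte counts above are a total budget split between the incremental/full osdmap caches and the key/value cache. Converting them to MiB for readability:

```python
# Converting the autotuner's byte counts above into MiB for readability.
MiB = 2 ** 20
for name, val in [("cache_size", 1020054731), ("inc_alloc", 348127232),
                  ("full_alloc", 348127232), ("kv_alloc", 318767104)]:
    print(f"{name}: {val / MiB:.0f} MiB")
# -> cache_size ~973 MiB, inc/full allocations 332 MiB each, kv 304 MiB
```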
Jan 22 09:28:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:04.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:28:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:04.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:28:04 np0005592159 nova_compute[226433]: 2026-01-22 14:28:04.681 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:04.732+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:04 np0005592159 nova_compute[226433]: 2026-01-22 14:28:04.942 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:05.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:06.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:06.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:06.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:07.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:08 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:08.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:08.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:08.811+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592159 nova_compute[226433]: 2026-01-22 14:28:09.685 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:09.824+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:09 np0005592159 nova_compute[226433]: 2026-01-22 14:28:09.944 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:10.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:10.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:10.844+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:11 np0005592159 nova_compute[226433]: 2026-01-22 14:28:11.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:11.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:12.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:12.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:12.911+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:13.862+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:14.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:14.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:14 np0005592159 nova_compute[226433]: 2026-01-22 14:28:14.689 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:14.823+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:14 np0005592159 podman[247949]: 2026-01-22 14:28:14.903425958 +0000 UTC m=+0.082725563 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:28:14 np0005592159 nova_compute[226433]: 2026-01-22 14:28:14.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:15 np0005592159 podman[247949]: 2026-01-22 14:28:15.003953245 +0000 UTC m=+0.183252840 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
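These audit entries show the mgr clearing per-host osd_memory_target overrides for compute-0 and compute-1 (the osd/host: masks suggest the cephadm memory-autotuning path). The equivalent manual commands, sketched via subprocess with the values taken from the audit lines:

```python
# Sketch: replay the "config rm" commands dispatched above by hand
# (hosts and option name taken from the audit lines; requires a client
# with permission to change cluster configuration).
import subprocess

for host in ("compute-0", "compute-1"):
    subprocess.run(
        ["ceph", "config", "rm", f"osd/host:{host}", "osd_memory_target"],
        check=True,
    )
```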
Jan 22 09:28:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592159 podman[248103]: 2026-01-22 14:28:15.663105774 +0000 UTC m=+0.060041496 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:28:15 np0005592159 podman[248103]: 2026-01-22 14:28:15.669999192 +0000 UTC m=+0.066934894 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:28:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:15.844+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:15 np0005592159 podman[248170]: 2026-01-22 14:28:15.860619522 +0000 UTC m=+0.050517736 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public)
Jan 22 09:28:15 np0005592159 podman[248170]: 2026-01-22 14:28:15.874719756 +0000 UTC m=+0.064617970 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2)
Jan 22 09:28:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:16.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:16.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:16.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:28:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:17.830+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:18.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:18.877+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:19 np0005592159 nova_compute[226433]: 2026-01-22 14:28:19.691 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:19.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:19 np0005592159 nova_compute[226433]: 2026-01-22 14:28:19.996 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:20.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:20.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:20.860+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:21.910+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:22 np0005592159 podman[248388]: 2026-01-22 14:28:22.025191387 +0000 UTC m=+0.073468791 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 09:28:22 np0005592159 nova_compute[226433]: 2026-01-22 14:28:22.088 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:22 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:22.088 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:28:22 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:22.089 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:28:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:22.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:22.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:22.883+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:23.885+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:24.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:24.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:24 np0005592159 nova_compute[226433]: 2026-01-22 14:28:24.695 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:24.913+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:25 np0005592159 nova_compute[226433]: 2026-01-22 14:28:24.999 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:28:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:25.920+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:26.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:26.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:27 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:27.938+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:28.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:28 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:28.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:28.915+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:29.090 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:29 np0005592159 nova_compute[226433]: 2026-01-22 14:28:29.699 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:29 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:29.904+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:30 np0005592159 nova_compute[226433]: 2026-01-22 14:28:30.002 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:30.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:30.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:31 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:31.931+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:32.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:32.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:32 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:32 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:32.961+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:33 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:33 np0005592159 nova_compute[226433]: 2026-01-22 14:28:33.993 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating tmpfile /var/lib/nova/instances/tmpbphf1dve to notify to other compute nodes that they should mount the same storage. _create_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10041#033[00m
Jan 22 09:28:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:33.997+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:34 np0005592159 podman[248463]: 2026-01-22 14:28:34.043369085 +0000 UTC m=+0.097314801 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.109 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] destination check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path=<?>,is_shared_block_storage=<?>,is_shared_instance_path=<?>,is_volume_backed=<?>,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_destination /usr/lib/python3.9/site-packages/nova/compute/manager.py:8476#033[00m
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.135 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.135 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.143 226437 INFO nova.compute.rpcapi [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Automatically selected compute RPC version 6.2 from minimum service version 66#033[00m
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.143 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "compute-rpcapi-router" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:34.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:34.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:34 np0005592159 nova_compute[226433]: 2026-01-22 14:28:34.701 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:34.994+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592159 nova_compute[226433]: 2026-01-22 14:28:35.003 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592159 nova_compute[226433]: 2026-01-22 14:28:35.961 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8604#033[00m
Jan 22 09:28:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:35.987+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:35 np0005592159 nova_compute[226433]: 2026-01-22 14:28:35.995 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:35 np0005592159 nova_compute[226433]: 2026-01-22 14:28:35.996 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:35 np0005592159 nova_compute[226433]: 2026-01-22 14:28:35.996 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:28:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:28:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:36.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:28:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:36.942+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.055 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.080 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.082 226437 DEBUG os_brick.utils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.083 226437 INFO oslo.privsep.daemon [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpk2q2e022/privsep.sock']#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.777 226437 INFO oslo.privsep.daemon [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.642 248518 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.645 248518 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.647 248518 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.647 248518 INFO oslo.privsep.daemon [-] privsep daemon running as pid 248518#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.781 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[b51b87aa-e072-4de2-a51e-a1a2d8671e38]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:37 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.872 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.885 248518 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.885 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[ec379100-8078-4500-9648-b963dd59b562]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.887 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.894 248518 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.894 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[56bf0476-a2ff-4cac-b5b8-4ca30389adfb]: (4, ('InitiatorName=iqn.1994-05.com.redhat:5333c49f4ca5', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.896 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.904 248518 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.905 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[53037e6e-4382-4ca6-bb8c-73ef0e919028]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.907 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[320d483d-1614-470a-a218-7b9a3db44691]: (4, '5492a354-d192-4c48-8602-99be1884b049') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.907 226437 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.927 226437 DEBUG oslo_concurrency.processutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.930 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.initiator.connectors.lightos [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 22 09:28:37 np0005592159 nova_compute[226433]: 2026-01-22 14:28:37.931 226437 DEBUG os_brick.utils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] <== get_connector_properties: return (849ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:5333c49f4ca5', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '5492a354-d192-4c48-8602-99be1884b049', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 22 09:28:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:37.979+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:38.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:28:38 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2342146323' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:28:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:39.002+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:39 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.257 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] migrate_data in pre_live_migration: LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10827#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Creating instance directory: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10840#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Ensure instance console log exists: /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Connecting volumes before live migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10901#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.258 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.259 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.260 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:39 np0005592159 systemd[1]: Starting libvirt secret daemon...
Jan 22 09:28:39 np0005592159 systemd[1]: Started libvirt secret daemon.
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.317 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Plugging VIFs using destination host port bindings before live migration. _pre_live_migration_plug_vifs /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10794#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.318 226437 DEBUG nova.virt.libvirt.vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:29Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.319 226437 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.320 226437 DEBUG nova.network.os_vif_util [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.320 226437 DEBUG os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.321 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.321 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.322 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.324 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.325 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2b1b16d5-1e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.325 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2b1b16d5-1e, col_values=(('external_ids', {'iface-id': '2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:af:b6', 'vm-uuid': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.327 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:39 np0005592159 NetworkManager[49000]: <info>  [1769092119.3280] manager: (tap2b1b16d5-1e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.330 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.336 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.337 226437 INFO os_vif [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.340 226437 DEBUG nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] No dst_numa_info in migrate_data, no cores to power up in pre_live_migration. pre_live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10954#033[00m
Jan 22 09:28:39 np0005592159 nova_compute[226433]: 2026-01-22 14:28:39.340 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] driver pre_live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8668#033[00m
Jan 22 09:28:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:39.963+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592159 nova_compute[226433]: 2026-01-22 14:28:40.005 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:40.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:28:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:40.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:28:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:40.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:41 np0005592159 nova_compute[226433]: 2026-01-22 14:28:41.578 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b updated with migration profile {'migrating_to': 'compute-2.ctlplane.example.com'} successfully _setup_migration_port_profile /usr/lib/python3.9/site-packages/nova/network/neutron.py:354#033[00m
Jan 22 09:28:41 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:41 np0005592159 nova_compute[226433]: 2026-01-22 14:28:41.816 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] pre_live_migration result data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpbphf1dve',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='d5a14597-bdb5-4f11-9e87-410238b00d48'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) pre_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8723#033[00m
Jan 22 09:28:41 np0005592159 systemd[1]: Starting libvirt proxy daemon...
Jan 22 09:28:41 np0005592159 systemd[1]: Started libvirt proxy daemon.
Jan 22 09:28:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:41.976+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:42 np0005592159 kernel: tap2b1b16d5-1e: entered promiscuous mode
Jan 22 09:28:42 np0005592159 NetworkManager[49000]: <info>  [1769092122.0982] manager: (tap2b1b16d5-1e): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Jan 22 09:28:42 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:42Z|00045|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this additional chassis.
Jan 22 09:28:42 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:42Z|00046|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.098 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:42 np0005592159 systemd-udevd[248612]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:28:42 np0005592159 systemd-machined[194970]: New machine qemu-4-instance-00000012.
Jan 22 09:28:42 np0005592159 NetworkManager[49000]: <info>  [1769092122.1402] device (tap2b1b16d5-1e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:28:42 np0005592159 NetworkManager[49000]: <info>  [1769092122.1408] device (tap2b1b16d5-1e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:28:42 np0005592159 systemd[1]: Started Virtual Machine qemu-4-instance-00000012.
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.163 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:42 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:42Z|00047|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b ovn-installed in OVS
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.175 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:42.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:42 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.632 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092122.631511, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.633 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Started (Lifecycle Event)#033[00m
Jan 22 09:28:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:42.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:42 np0005592159 nova_compute[226433]: 2026-01-22 14:28:42.661 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:42.967+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:43 np0005592159 nova_compute[226433]: 2026-01-22 14:28:43.167 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092123.166783, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:28:43 np0005592159 nova_compute[226433]: 2026-01-22 14:28:43.167 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:28:43 np0005592159 nova_compute[226433]: 2026-01-22 14:28:43.193 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:43 np0005592159 nova_compute[226433]: 2026-01-22 14:28:43.197 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:28:43 np0005592159 nova_compute[226433]: 2026-01-22 14:28:43.215 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During the sync_power process the instance has moved from host compute-0.ctlplane.example.com to host compute-2.ctlplane.example.com#033[00m
Jan 22 09:28:43 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:43 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:43.947+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:44 np0005592159 nova_compute[226433]: 2026-01-22 14:28:44.329 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:44.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:44.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:44 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:44.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.008 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:45Z|00048|binding|INFO|Claiming lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for this chassis.
Jan 22 09:28:45 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:45Z|00049|binding|INFO|2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b: Claiming fa:16:3e:f9:af:b6 10.100.0.3
Jan 22 09:28:45 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:45Z|00050|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b up in Southbound
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.577 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[False], additional_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.579 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 bound to our chassis#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.582 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b247a422-e88b-4d6e-9b42-d4947ce89ea4#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.594 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[52066b1a-6fe9-4c18-aab6-58b6914c6b87]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.595 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb247a422-e1 in ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.597 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb247a422-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.597 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3ba2a967-91b3-4074-a876-42b8c3d97eea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.598 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[a81b6797-390e-415b-830e-cf2ec51a40cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.618 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[0e897fb3-d4b9-419c-880d-20c0615f6216]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.648 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb8385b-ca4c-4a1e-b13e-d303a1cb377b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.683 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[32b09f63-cf0a-4542-bde8-1a4dd0492854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.690 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc08761-34c3-4c4a-b716-498dce99599b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 NetworkManager[49000]: <info>  [1769092125.6930] manager: (tapb247a422-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/32)
Jan 22 09:28:45 np0005592159 systemd-udevd[248680]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.725 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7edf9b-7a9f-46a7-8af6-0169f2165c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.728 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7b4c5ec4-3f2a-4693-962c-faf8c46cae37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:45 np0005592159 NetworkManager[49000]: <info>  [1769092125.7563] device (tapb247a422-e0): carrier: link connected
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.759 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5f9275-e823-46ed-bd48-0badccf82158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.777 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd804b9-dc7b-430e-832e-4b2fb395e0b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597646, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248701, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.788 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b949df6f-b966-4e30-997d-5bd9da7ed5b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:2b35'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 597646, 'tstamp': 597646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248702, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.799 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[f21d3b10-a00f-41be-a188-bcd31b543473]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb247a422-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:2b:35'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597646, 'reachable_time': 16968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 248703, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.821 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7c6e2d79-38b3-48f2-9864-27d138e4fa30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.845 226437 INFO nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Post operation of migration started#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.884 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7d904f41-def3-429c-9d63-76fef3ab75af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.886 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.887 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.888 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb247a422-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:45.894+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.931 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 NetworkManager[49000]: <info>  [1769092125.9321] manager: (tapb247a422-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 22 09:28:45 np0005592159 kernel: tapb247a422-e0: entered promiscuous mode
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.937 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.938 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb247a422-e0, col_values=(('external_ids', {'iface-id': '9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:28:45 np0005592159 ovn_controller[133156]: 2026-01-22T14:28:45Z|00051|binding|INFO|Releasing lport 9df913a6-89f7-4dbb-be1b-b1f6a67fcd4a from this chassis (sb_readonly=0)
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.940 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.952 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 nova_compute[226433]: 2026-01-22 14:28:45.956 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.957 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.958 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[77252f62-7361-48d5-8459-2b0a29b47f39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.958 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: global
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    log         /dev/log local0 debug
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    log-tag     haproxy-metadata-proxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    user        root
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    group       root
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    maxconn     1024
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    pidfile     /var/lib/neutron/external/pids/b247a422-e88b-4d6e-9b42-d4947ce89ea4.pid.haproxy
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    daemon
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: defaults
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    log global
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    mode http
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    option httplog
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    option dontlognull
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    option http-server-close
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    option forwardfor
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    retries                 3
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    timeout http-request    30s
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    timeout connect         30s
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    timeout client          32s
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    timeout server          32s
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    timeout http-keep-alive 30s
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: listen listener
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    bind 169.254.169.254:80
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]:    http-request add-header X-OVN-Network-ID b247a422-e88b-4d6e-9b42-d4947ce89ea4
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:28:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:45.960 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'env', 'PROCESS_TAG=haproxy-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b247a422-e88b-4d6e-9b42-d4947ce89ea4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Jan 22 09:28:46 np0005592159 nova_compute[226433]: 2026-01-22 14:28:46.219 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:46 np0005592159 nova_compute[226433]: 2026-01-22 14:28:46.220 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:46 np0005592159 nova_compute[226433]: 2026-01-22 14:28:46.220 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:28:46 np0005592159 podman[248738]: 2026-01-22 14:28:46.321593817 +0000 UTC m=+0.053846987 container create 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 22 09:28:46 np0005592159 systemd[1]: Started libpod-conmon-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope.
Jan 22 09:28:46 np0005592159 podman[248738]: 2026-01-22 14:28:46.293886663 +0000 UTC m=+0.026139863 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:28:46 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:28:46 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30a4f7c6c9d491773a41a6ac99e5ad17b247e5c5f1025a81646d807b0889471c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:28:46 np0005592159 podman[248738]: 2026-01-22 14:28:46.414432985 +0000 UTC m=+0.146686175 container init 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 09:28:46 np0005592159 podman[248738]: 2026-01-22 14:28:46.420787019 +0000 UTC m=+0.153040189 container start 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:28:46 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : New worker (248759) forked
Jan 22 09:28:46 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : Loading success.
Jan 22 09:28:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:46.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:46.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:46.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.203 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:28:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:47 np0005592159 nova_compute[226433]: 2026-01-22 14:28:47.744 226437 DEBUG nova.network.neutron [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:47 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:47.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:48.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:48.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:48.868+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:48 np0005592159 nova_compute[226433]: 2026-01-22 14:28:48.911 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:48 np0005592159 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:28:48 np0005592159 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:28:48 np0005592159 nova_compute[226433]: 2026-01-22 14:28:48.938 226437 DEBUG oslo_concurrency.lockutils [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.allocate_pci_devices_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:28:48 np0005592159 nova_compute[226433]: 2026-01-22 14:28:48.943 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 1 of 3#033[00m
Jan 22 09:28:48 np0005592159 virtqemud[225907]: Domain id=4 name='instance-00000012' uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 is tainted: custom-monitor
Jan 22 09:28:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592159 nova_compute[226433]: 2026-01-22 14:28:49.331 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:49 np0005592159 nova_compute[226433]: 2026-01-22 14:28:49.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #103. Immutable memtables: 0.
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.645520) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 103
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129645623, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 251, "total_data_size": 3045702, "memory_usage": 3089088, "flush_reason": "Manual Compaction"}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #104: started
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129661701, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 104, "file_size": 1979886, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52099, "largest_seqno": 53657, "table_properties": {"data_size": 1973541, "index_size": 3356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16179, "raw_average_key_size": 21, "raw_value_size": 1959839, "raw_average_value_size": 2561, "num_data_blocks": 145, "num_entries": 765, "num_filter_entries": 765, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092027, "oldest_key_time": 1769092027, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 16277 microseconds, and 7423 cpu microseconds.
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.661814) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #104: 1979886 bytes OK
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.661854) [db/memtable_list.cc:519] [default] Level-0 commit table #104 started
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666218) [db/memtable_list.cc:722] [default] Level-0 commit table #104: memtable #1 done
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666250) EVENT_LOG_v1 {"time_micros": 1769092129666242, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.666280) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 3038218, prev total WAL file size 3038218, number of live WAL files 2.
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000100.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.668000) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [104(1933KB)], [102(9511KB)]
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129668059, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [104], "files_L6": [102], "score": -1, "input_data_size": 11719524, "oldest_snapshot_seqno": -1}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #105: 9614 keys, 10082978 bytes, temperature: kUnknown
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129741912, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 105, "file_size": 10082978, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10028140, "index_size": 29702, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24069, "raw_key_size": 259579, "raw_average_key_size": 27, "raw_value_size": 9862066, "raw_average_value_size": 1025, "num_data_blocks": 1122, "num_entries": 9614, "num_filter_entries": 9614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 105, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.742170) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10082978 bytes
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.743485) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.5 rd, 136.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.3 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10131, records dropped: 517 output_compression: NoCompression
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.743505) EVENT_LOG_v1 {"time_micros": 1769092129743496, "job": 64, "event": "compaction_finished", "compaction_time_micros": 73917, "compaction_time_cpu_micros": 45131, "output_level": 6, "num_output_files": 1, "total_output_size": 10082978, "num_input_records": 10131, "num_output_records": 9614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129744373, "job": 64, "event": "table_file_deletion", "file_number": 104}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000102.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092129746786, "job": 64, "event": "table_file_deletion", "file_number": 102}
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.667902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:28:49.746876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:28:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:49.851+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:49 np0005592159 nova_compute[226433]: 2026-01-22 14:28:49.950 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 2 of 3#033[00m
Jan 22 09:28:50 np0005592159 nova_compute[226433]: 2026-01-22 14:28:50.011 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:50.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:50 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:50.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:50 np0005592159 nova_compute[226433]: 2026-01-22 14:28:50.956 226437 INFO nova.virt.libvirt.driver [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Sending announce-self command to QEMU monitor. Attempt 3 of 3#033[00m
Jan 22 09:28:50 np0005592159 nova_compute[226433]: 2026-01-22 14:28:50.963 226437 DEBUG nova.compute.manager [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:28:51 np0005592159 nova_compute[226433]: 2026-01-22 14:28:51.006 226437 DEBUG nova.objects.instance [None req-ab1eb76a-3208-4ca1-854c-3d8d67f55fa9 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Jan 22 09:28:51 np0005592159 nova_compute[226433]: 2026-01-22 14:28:51.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:51 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:51.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:52 np0005592159 nova_compute[226433]: 2026-01-22 14:28:52.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:52.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:52.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:52 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:52.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:53 np0005592159 podman[248773]: 2026-01-22 14:28:53.016942046 +0000 UTC m=+0.061803094 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:28:53 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:53.878+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:54 np0005592159 nova_compute[226433]: 2026-01-22 14:28:54.254 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Check if temp file /var/lib/nova/instances/tmpwmqqt0dz exists to indicate shared storage is being used for migration. Exists? False _check_shared_storage_test_file /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10065#033[00m
Jan 22 09:28:54 np0005592159 nova_compute[226433]: 2026-01-22 14:28:54.255 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] source check data is LibvirtLiveMigrateData(bdms=<?>,block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=<?>,old_vol_attachment_ids=<?>,serial_listen_addr=None,serial_listen_ports=<?>,src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=<?>,target_connect_addr=<?>,vifs=[VIFMigrateData],wait_for_vif_plugged=<?>) check_can_live_migrate_source /usr/lib/python3.9/site-packages/nova/compute/manager.py:8587#033[00m
Jan 22 09:28:54 np0005592159 nova_compute[226433]: 2026-01-22 14:28:54.335 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:54 np0005592159 nova_compute[226433]: 2026-01-22 14:28:54.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:54.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:54.895+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:55 np0005592159 nova_compute[226433]: 2026-01-22 14:28:55.013 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:55.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:56 np0005592159 nova_compute[226433]: 2026-01-22 14:28:56.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:56 np0005592159 nova_compute[226433]: 2026-01-22 14:28:56.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:28:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:56.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:28:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:56.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:28:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:56.901+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.572 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:28:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:57 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.839 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:28:57 np0005592159 nova_compute[226433]: 2026-01-22 14:28:57.841 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:28:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:57.864+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:58 np0005592159 nova_compute[226433]: 2026-01-22 14:28:58.223 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:28:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:28:58.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:28:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:28:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:28:58.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:28:58 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:58.864+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.040 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.337 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.378 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.378 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.379 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:59 np0005592159 nova_compute[226433]: 2026-01-22 14:28:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:28:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:28:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:28:59.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:28:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.014 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.491 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG oslo_concurrency.lockutils [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.492 226437 DEBUG nova.compute.manager [req-9b3612af-1e0b-4ea1-b204-7c5c83afd919 req-8232e538-2346-49ba-bc45-de86ea2ead0d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.556 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.556 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:29:00 np0005592159 nova_compute[226433]: 2026-01-22 14:29:00.557 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:00.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:00.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:00.810+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:01 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2544514787' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.122 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.454 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.454 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.458 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.458 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.462 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.462 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.648 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.649 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4230MB free_disk=20.771652221679688GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.649 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.650 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.791 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Took 6.35 seconds for pre_live_migration on destination host compute-0.ctlplane.example.com.#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.792 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.812 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating resource usage from migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.823 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] live_migration data is LibvirtLiveMigrateData(bdms=[LibvirtLiveMigrateBDMInfo],block_migration=False,disk_available_mb=19456,disk_over_commit=False,dst_numa_info=<?>,dst_supports_numa_live_migration=<?>,dst_wants_file_backed_memory=False,file_backed_memory_discard=<?>,filename='tmpwmqqt0dz',graphics_listen_addr_spice=127.0.0.1,graphics_listen_addr_vnc=::,image_type='rbd',instance_relative_path='5e2e07b8-ca9c-4abc-81b0-66964eb87fa4',is_shared_block_storage=True,is_shared_instance_path=False,is_volume_backed=True,migration=Migration(2fc416ea-9e83-4513-bb8e-4a3040aca5b2),old_vol_attachment_ids={6e173a8e-fd98-4de4-a470-2c50f67a6d48='430e38ad-b39f-4ad2-a8ef-a7940bd63b9e'},serial_listen_addr=None,serial_listen_ports=[],src_supports_native_luks=<?>,src_supports_numa_live_migration=<?>,supported_perf_events=[],target_connect_addr=None,vifs=[VIFMigrateData],wait_for_vif_plugged=True) _do_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:8939#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.827 226437 DEBUG nova.objects.instance [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lazy-loading 'migration_context' on Instance uuid 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.828 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Starting monitoring of live migration _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10639#033[00m
Jan 22 09:29:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:01.829+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.830 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Operation thread is still running _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10440#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.830 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration not running yet _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10449#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.840 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.841 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.842 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.842 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.921 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Find same serial number: pos=1, serial=6e173a8e-fd98-4de4-a470-2c50f67a6d48 _update_volume_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:242#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.922 226437 DEBUG nova.virt.libvirt.vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:51Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.922 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system"}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.923 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.923 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating guest XML with vif config: <interface type="ethernet">
Jan 22 09:29:01 np0005592159 nova_compute[226433]:  <mac address="fa:16:3e:f9:af:b6"/>
Jan 22 09:29:01 np0005592159 nova_compute[226433]:  <model type="virtio"/>
Jan 22 09:29:01 np0005592159 nova_compute[226433]:  <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:29:01 np0005592159 nova_compute[226433]:  <mtu size="1442"/>
Jan 22 09:29:01 np0005592159 nova_compute[226433]:  <target dev="tap2b1b16d5-1e"/>
Jan 22 09:29:01 np0005592159 nova_compute[226433]: </interface>
Jan 22 09:29:01 np0005592159 nova_compute[226433]: _update_vif_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:388#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.924 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] About to invoke the migrate API _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10272#033[00m
Jan 22 09:29:01 np0005592159 nova_compute[226433]: 2026-01-22 14:29:01.998 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.334 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current None elapsed 0 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.334 226437 INFO nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Increasing downtime to 50 ms after 0 sec elapsed time#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.412 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration running for 0 secs, memory 100% remaining (bytes processed=0, remaining=0, total=0); disk 100% remaining (bytes processed=0, remaining=0, total=0).#033[00m
Jan 22 09:29:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:02 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3776743246' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.437 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.442 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.516 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.555 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.555 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:02.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:02.854+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.994 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Current 50 elapsed 1 steps [(0, 50), (150, 95), (300, 140), (450, 185), (600, 230), (750, 275), (900, 320), (1050, 365), (1200, 410), (1350, 455), (1500, 500)] update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:512#033[00m
Jan 22 09:29:02 np0005592159 nova_compute[226433]: 2026-01-22 14:29:02.995 226437 DEBUG nova.virt.libvirt.migration [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Downtime does not need to change update_downtime /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:525#033[00m
Jan 22 09:29:03 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.353 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092143.3532639, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.354 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.370 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 WARNING nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.371 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG nova.compute.manager [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing instance network info cache due to event network-changed-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.372 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Refreshing network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.374 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.377 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: migrating, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.403 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] During sync_power_state the instance has a pending task (migrating). Skip.#033[00m
Jan 22 09:29:03 np0005592159 kernel: tap2b1b16d5-1e (unregistering): left promiscuous mode
Jan 22 09:29:03 np0005592159 NetworkManager[49000]: <info>  [1769092143.5524] device (tap2b1b16d5-1e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 22 09:29:03 np0005592159 ovn_controller[133156]: 2026-01-22T14:29:03Z|00052|binding|INFO|Releasing lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b from this chassis (sb_readonly=0)
Jan 22 09:29:03 np0005592159 ovn_controller[133156]: 2026-01-22T14:29:03Z|00053|binding|INFO|Setting lport 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b down in Southbound
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.581 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:03 np0005592159 ovn_controller[133156]: 2026-01-22T14:29:03Z|00054|binding|INFO|Removing iface tap2b1b16d5-1e ovn-installed in OVS
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.583 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.588 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:af:b6 10.100.0.3'], port_security=['fa:16:3e:f9:af:b6 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com,compute-0.ctlplane.example.com', 'activation-strategy': 'rarp', 'additional-chassis-activated': '7335e41f-b1b8-4c04-9c19-8788162d5bb4'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5e2e07b8-ca9c-4abc-81b0-66964eb87fa4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6b4b5b635cbf4888966d80692b78281f', 'neutron:revision_number': '18', 'neutron:security_group_ids': 'eb69c488-c37b-4857-8e13-8b621218738b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64b04a22-643c-4588-a6a6-158f6179c5fc, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.589 143497 INFO neutron.agent.ovn.metadata.agent [-] Port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b in datapath b247a422-e88b-4d6e-9b42-d4947ce89ea4 unbound from our chassis#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.592 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b247a422-e88b-4d6e-9b42-d4947ce89ea4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.594 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[a0a765db-556b-400d-b707-11fb8f0b7907]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.595 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 namespace which is not needed anymore#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.609 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:03 np0005592159 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 22 09:29:03 np0005592159 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000012.scope: Consumed 2.080s CPU time.
Jan 22 09:29:03 np0005592159 systemd-machined[194970]: Machine qemu-4-instance-00000012 terminated.
Jan 22 09:29:03 np0005592159 virtqemud[225907]: Unable to get XATTR trusted.libvirt.security.ref_selinux on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 09:29:03 np0005592159 virtqemud[225907]: Unable to get XATTR trusted.libvirt.security.ref_dac on volumes/volume-6e173a8e-fd98-4de4-a470-2c50f67a6d48: No such file or directory
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.735 226437 DEBUG nova.virt.libvirt.guest [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Domain has shutdown/gone away: Requested operation is not valid: domain is not running get_job_info /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:688#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.736 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation has completed#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.736 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] _post_live_migration() is started..#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrate API has completed _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10279#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread has finished _live_migration_operation /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10327#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.737 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10630#033[00m
Jan 22 09:29:03 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : haproxy version is 2.8.14-c23fe91
Jan 22 09:29:03 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [NOTICE]   (248757) : path to executable is /usr/sbin/haproxy
Jan 22 09:29:03 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [WARNING]  (248757) : Exiting Master process...
Jan 22 09:29:03 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [ALERT]    (248757) : Current worker (248759) exited with code 143 (Terminated)
Jan 22 09:29:03 np0005592159 neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4[248753]: [WARNING]  (248757) : All workers exited. Exiting... (0)
Jan 22 09:29:03 np0005592159 systemd[1]: libpod-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope: Deactivated successfully.
Jan 22 09:29:03 np0005592159 podman[248924]: 2026-01-22 14:29:03.769958997 +0000 UTC m=+0.060919766 container died 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:29:03 np0005592159 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b-userdata-shm.mount: Deactivated successfully.
Jan 22 09:29:03 np0005592159 systemd[1]: var-lib-containers-storage-overlay-30a4f7c6c9d491773a41a6ac99e5ad17b247e5c5f1025a81646d807b0889471c-merged.mount: Deactivated successfully.
Jan 22 09:29:03 np0005592159 podman[248924]: 2026-01-22 14:29:03.816821692 +0000 UTC m=+0.107782411 container cleanup 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:29:03 np0005592159 systemd[1]: libpod-conmon-3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b.scope: Deactivated successfully.
Jan 22 09:29:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:03.872+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:03 np0005592159 podman[248971]: 2026-01-22 14:29:03.875268114 +0000 UTC m=+0.037427223 container remove 3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.881 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[bec22f60-683d-492d-bd25-b6f39ac9c8a2]: (4, ('Thu Jan 22 02:29:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b)\n3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b\nThu Jan 22 02:29:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 (3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b)\n3f8d50ba790e2d05462a6a55fd8218af8632a807958c685028c074be3cd8b14b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.882 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0892fd70-b84e-414b-b826-8f951fb39883]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.883 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb247a422-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.884 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:03 np0005592159 kernel: tapb247a422-e0: left promiscuous mode
Jan 22 09:29:03 np0005592159 nova_compute[226433]: 2026-01-22 14:29:03.904 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.907 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[25afedba-6a56-4ded-9041-ff040eda79c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.918 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[eeaf4b66-ea25-4f91-adc4-016bfcb97a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.919 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[439ede5f-3505-441f-8007-6c427d52773b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.931 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[dc570c2f-cc50-4a0b-8d51-03c69e3aa01b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 597638, 'reachable_time': 15837, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248991, 'error': None, 'target': 'ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 systemd[1]: run-netns-ovnmeta\x2db247a422\x2de88b\x2d4d6e\x2d9b42\x2dd4947ce89ea4.mount: Deactivated successfully.
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.934 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b247a422-e88b-4d6e-9b42-d4947ce89ea4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:29:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:03.934 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[3b81f5ab-0c77-438e-aca6-144f88aadd41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:29:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.416 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG oslo_concurrency.lockutils [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.417 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.418 226437 DEBUG nova.compute.manager [req-c232e5ea-fd5c-4af3-947b-657f9a9592e6 req-4592288a-f889-4b2b-ac7f-fdd873b6a184 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:29:04 np0005592159 podman[248993]: 2026-01-22 14:29:04.480436716 +0000 UTC m=+0.089303126 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:29:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:04.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:04.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.901 226437 DEBUG nova.network.neutron [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Activated binding for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b and host compute-0.ctlplane.example.com migrate_instance_start /usr/lib/python3.9/site-packages/nova/network/neutron.py:3181#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.902 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.post_live_migration_at_source with original source VIFs from migrate_data: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9326#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.903 226437 DEBUG nova.virt.libvirt.vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-22T14:28:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-LiveMigrationTest-server-1735692043',display_name='tempest-LiveMigrationTest-server-1735692043',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-livemigrationtest-server-1735692043',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-22T14:28:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='6b4b5b635cbf4888966d80692b78281f',ramdisk_id='',reservation_id='r-gogvl9kh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-LiveMigrationTest-1708062570',owner_user_name='tempest-LiveMigrationTest-1708062570-project-member'},tags=<?>,task_state='migrating',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-22T14:28:53Z,user_data=None,user_id='32df6d966d7540dd851bf51a1148be65',uuid=5e2e07b8-ca9c-4abc-81b0-66964eb87fa4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.903 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converting VIF {"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.905 226437 DEBUG nova.network.os_vif_util [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.905 226437 DEBUG os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.908 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.908 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2b1b16d5-1e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.911 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.914 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.920 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.924 226437 INFO os_vif [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f9:af:b6,bridge_name='br-int',has_traffic_filtering=True,id=2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b,network=Network(b247a422-e88b-4d6e-9b42-d4947ce89ea4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2b1b16d5-1e')#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.924 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.free_pci_device_allocations_for_instance" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.925 226437 DEBUG nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Calling driver.cleanup from _post_live_migration _post_live_migration /usr/lib/python3.9/site-packages/nova/compute/manager.py:9349#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.926 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deleting instance files /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del#033[00m
Jan 22 09:29:04 np0005592159 nova_compute[226433]: 2026-01-22 14:29:04.927 226437 INFO nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Deletion of /var/lib/nova/instances/5e2e07b8-ca9c-4abc-81b0-66964eb87fa4_del complete#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.005 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.006 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.006 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG oslo_concurrency.lockutils [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.007 226437 DEBUG nova.compute.manager [req-b9072079-8384-4d67-a561-d4d999c23a50 req-4d050623-3efb-45a2-9944-6aad37ce0b25 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-unplugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with task_state migrating. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.017 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.301 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updated VIF entry in instance network info cache for port 2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.301 226437 DEBUG nova.network.neutron [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Updating instance_info_cache with network_info: [{"id": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "address": "fa:16:3e:f9:af:b6", "network": {"id": "b247a422-e88b-4d6e-9b42-d4947ce89ea4", "bridge": "br-int", "label": "tempest-LiveMigrationTest-913693761-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6b4b5b635cbf4888966d80692b78281f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2b1b16d5-1e", "ovs_interfaceid": "2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {"os_vif_delegation": true, "migrating_to": "compute-0.ctlplane.example.com"}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:29:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:05 np0005592159 nova_compute[226433]: 2026-01-22 14:29:05.480 226437 DEBUG oslo_concurrency.lockutils [req-0b261032-686c-43be-8327-a3d3952bcd39 req-90a4ea6e-7620-421b-9278-00e171c0a799 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-5e2e07b8-ca9c-4abc-81b0-66964eb87fa4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:29:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:05.809+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.550 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.551 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.552 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.553 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.554 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.555 226437 DEBUG oslo_concurrency.lockutils [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.556 226437 DEBUG nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] No waiting events found dispatching network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:29:06 np0005592159 nova_compute[226433]: 2026-01-22 14:29:06.556 226437 WARNING nova.compute.manager [req-a646b1f5-5b61-40cd-a0ea-c0fa9273858d req-997c9c6f-3873-41de-9adb-b1732db367ba 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Received unexpected event network-vif-plugged-2b1b16d5-1ed9-4cc8-b865-c74a5de4f29b for instance with vm_state active and task_state migrating.#033[00m
Jan 22 09:29:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:06.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:06.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:07.825+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:08 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:08.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:08.852+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:09.840+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:09 np0005592159 nova_compute[226433]: 2026-01-22 14:29:09.912 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:10 np0005592159 nova_compute[226433]: 2026-01-22 14:29:10.018 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:10.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:10.805+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:11 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:11.779+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.832 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "5e2e07b8-ca9c-4abc-81b0-66964eb87fa4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.892 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.892 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:29:11 np0005592159 nova_compute[226433]: 2026-01-22 14:29:11.893 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:12 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3324759149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.355 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.464 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.464 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.468 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.468 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:29:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:12.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.643 226437 WARNING nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.645 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4388MB free_disk=20.771652221679688GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.645 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.646 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.748 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration for instance 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 refers to another host's instance! _pair_instances_to_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:903#033[00m
Jan 22 09:29:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:12.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.800 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Skipping migration as instance is neither resizing nor live-migrating. _update_usage_from_migrations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1491#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.832 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2 is active on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.833 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.834 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:29:12 np0005592159 nova_compute[226433]: 2026-01-22 14:29:12.971 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:29:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:29:13 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/731787681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.452 226437 DEBUG oslo_concurrency.processutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.459 226437 DEBUG nova.compute.provider_tree [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.484 226437 DEBUG nova.scheduler.client.report [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.521 226437 DEBUG nova.compute.resource_tracker [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.521 226437 DEBUG oslo_concurrency.lockutils [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.526 226437 INFO nova.compute.manager [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Migrating instance to compute-0.ctlplane.example.com finished successfully.#033[00m
Jan 22 09:29:13 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.675 226437 INFO nova.scheduler.client.report [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] Deleted allocation for migration 2fc416ea-9e83-4513-bb8e-4a3040aca5b2#033[00m
Jan 22 09:29:13 np0005592159 nova_compute[226433]: 2026-01-22 14:29:13.675 226437 DEBUG nova.virt.libvirt.driver [None req-650b63d9-1772-4aee-949d-9d15c225509b 3005bb7eb8144d70b17bc8ad4fb97b3d c54832dab66e42848aa8ba0095d46051 - - default default] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Live migration monitoring is all done _live_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10662#033[00m
Jan 22 09:29:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:13.753+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:14.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:14.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:14.766+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:14 np0005592159 nova_compute[226433]: 2026-01-22 14:29:14.914 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:15 np0005592159 nova_compute[226433]: 2026-01-22 14:29:15.020 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:15 np0005592159 nova_compute[226433]: 2026-01-22 14:29:15.028 226437 DEBUG oslo_concurrency.lockutils [None req-948507e0-498f-43bb-aede-57b100eccc71 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "a0b3924b-4422-47c5-ba40-748e41b14d00" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:29:15 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:29:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:29:15 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774593624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:29:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:15.759+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:16 np0005592159 nova_compute[226433]: 2026-01-22 14:29:16.338 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:16.338 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:29:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:16.339 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:29:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:16.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
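
The paired anonymous "HEAD / HTTP/1.0" 200 entries arriving every two seconds from 192.168.122.100 and .102 are load-balancer style health probes against the RGW beast frontend. A small parsing sketch for pulling client, request and latency out of a beast access line in the format shown above (the regex is written for this log sample only):

    import re

    line = ('beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous '
            '[22/Jan/2026:14:29:16.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(
        r'beast: \S+: (\S+) .*"(\S+) (\S+) [^"]+" (\d{3}) (\d+).*latency=([\d.]+)s',
        line)
    client, method, path, status, size, latency = m.groups()
    print(client, method, path, status, size, latency)  # 192.168.122.100 HEAD / 200 0 0.000000000
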
Jan 22 09:29:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:16.749+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:17.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:17 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:18.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:18.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:18 np0005592159 nova_compute[226433]: 2026-01-22 14:29:18.735 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092143.733853, 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:29:18 np0005592159 nova_compute[226433]: 2026-01-22 14:29:18.736 226437 INFO nova.compute.manager [-] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:29:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:18.745+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:18 np0005592159 nova_compute[226433]: 2026-01-22 14:29:18.797 226437 DEBUG nova.compute.manager [None req-5fce806e-e6a3-4ddf-9ddb-50be8da55f5d - - - - - -] [instance: 5e2e07b8-ca9c-4abc-81b0-66964eb87fa4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:29:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:19.775+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:19 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:19 np0005592159 nova_compute[226433]: 2026-01-22 14:29:19.916 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:20 np0005592159 nova_compute[226433]: 2026-01-22 14:29:20.022 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:20.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:20.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:20.794+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:21 np0005592159 nova_compute[226433]: 2026-01-22 14:29:21.560 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:21.841+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:21 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:22.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:22.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:22.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:22 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3148 sec, osd.2 has slow ops (SLOW_OPS)
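
The SLOW_OPS health updates report the age of the oldest blocked request, so subtracting it from the message timestamp dates the start of the stall. A quick check of the arithmetic for the line above:

    from datetime import datetime, timedelta, timezone

    # "oldest one blocked for 3148 sec" reported at 14:29:22 UTC
    reported = datetime(2026, 1, 22, 14, 29, 22, tzinfo=timezone.utc)
    blocked_since = reported - timedelta(seconds=3148)
    print(blocked_since.isoformat())  # 2026-01-22T13:36:54+00:00, roughly 52.5 minutes earlier
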
Jan 22 09:29:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:23.341 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
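
The transaction above is the metadata agent acknowledging nb_cfg 13 (the SB_Global update it delayed by 7 seconds at 14:29:16) by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids. A rough sketch of the equivalent call through ovsdbapp's idl backend, assuming sb_api is an already-connected southbound API object (connection setup omitted; keyword support may differ between ovsdbapp releases):

    # Sketch only: sb_api is assumed to be a connected ovsdbapp OVSDB API instance.
    sb_api.db_set(
        'Chassis_Private',
        'c4fa18b6-ed0f-47ac-8eec-d1399749aa8e',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),
    ).execute(check_error=True)
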
Jan 22 09:29:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:23.818+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:23 np0005592159 podman[249127]: 2026-01-22 14:29:23.99153615 +0000 UTC m=+0.055434876 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
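
The podman entry above is a periodic healthcheck result for the ovn_metadata_agent container; its config_data shows the check is the /openstack/healthcheck script mounted into the container. The same check can be triggered on demand from the host, for example (container name taken from the log line; a zero return code means healthy):

    import subprocess

    # Run the container's configured healthcheck once and report the outcome.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_metadata_agent'])
    print('healthy' if result.returncode == 0 else 'unhealthy')
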
Jan 22 09:29:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:24.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:24.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:24.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:24 np0005592159 nova_compute[226433]: 2026-01-22 14:29:24.918 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:25 np0005592159 nova_compute[226433]: 2026-01-22 14:29:25.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:25.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:26.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:26.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:26.764+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:27 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:29:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:29:27 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:27.771+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:28.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:28.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:28 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:28.756+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:29 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:29.793+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:29 np0005592159 nova_compute[226433]: 2026-01-22 14:29:29.921 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:30 np0005592159 nova_compute[226433]: 2026-01-22 14:29:30.026 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:30.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:30.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:30 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:30.833+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:31.788+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:31 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:32.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:32.790+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:33 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:29:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:33.768+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:34 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:34.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:34 np0005592159 nova_compute[226433]: 2026-01-22 14:29:34.924 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:35 np0005592159 nova_compute[226433]: 2026-01-22 14:29:35.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:35 np0005592159 podman[249334]: 2026-01-22 14:29:35.02835462 +0000 UTC m=+0.091069411 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:29:35 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:35.761+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:36 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:36.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:36.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:36.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:37.755+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:38 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:38.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:38.791+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:39 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:39.835+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:39 np0005592159 nova_compute[226433]: 2026-01-22 14:29:39.926 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:40 np0005592159 nova_compute[226433]: 2026-01-22 14:29:40.029 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:40 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:40.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:40.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:40.806+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:41 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:41.824+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:42.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:42 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:42.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:42.831+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:43 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:43 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:43.804+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:44 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:44.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:44.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:44.772+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:44 np0005592159 nova_compute[226433]: 2026-01-22 14:29:44.927 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:45 np0005592159 nova_compute[226433]: 2026-01-22 14:29:45.031 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:45 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:45.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:46 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:46.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:46.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:46.828+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:29:47.204 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:29:47 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:47 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:47.861+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:29:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:48.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:29:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:48.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:48.820+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:49.783+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:49 np0005592159 nova_compute[226433]: 2026-01-22 14:29:49.929 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:50 np0005592159 nova_compute[226433]: 2026-01-22 14:29:50.034 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:50 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:50.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:50.803+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:51 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:51.830+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:52 np0005592159 nova_compute[226433]: 2026-01-22 14:29:52.556 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:52 np0005592159 nova_compute[226433]: 2026-01-22 14:29:52.557 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:52.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:52 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:52.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:53 np0005592159 nova_compute[226433]: 2026-01-22 14:29:53.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:53 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:29:53 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:53.831+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:54.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:54 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:54.851+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:54 np0005592159 nova_compute[226433]: 2026-01-22 14:29:54.932 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:55 np0005592159 podman[249423]: 2026-01-22 14:29:55.014815245 +0000 UTC m=+0.068663266 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:29:55 np0005592159 nova_compute[226433]: 2026-01-22 14:29:55.037 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:29:55 np0005592159 nova_compute[226433]: 2026-01-22 14:29:55.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:55 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:55.874+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:56.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:56.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:56.849+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:57 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:57.855+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:29:58 np0005592159 ovn_controller[133156]: 2026-01-22T14:29:58Z|00055|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.585 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.587 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:58 np0005592159 nova_compute[226433]: 2026-01-22 14:29:58.588 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:29:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:29:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:29:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:29:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:29:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:29:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:29:58.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:29:58 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:58.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:29:59 np0005592159 nova_compute[226433]: 2026-01-22 14:29:59.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:29:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:29:59.853+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:29:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:29:59 np0005592159 nova_compute[226433]: 2026-01-22 14:29:59.935 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.039 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.548 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.549 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:30:00 np0005592159 nova_compute[226433]: 2026-01-22 14:30:00.550 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:00.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:00.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops
Jan 22 09:30:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:01 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1158373973' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.017 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.264 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.265 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.268 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.269 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.434 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4389MB free_disk=20.77179718017578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.435 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.600 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.600 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.601 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.602 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=20GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:30:01 np0005592159 nova_compute[226433]: 2026-01-22 14:30:01.771 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:01.856+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:01 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:02 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1940122581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:02 np0005592159 nova_compute[226433]: 2026-01-22 14:30:02.249 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:02 np0005592159 nova_compute[226433]: 2026-01-22 14:30:02.255 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:30:02 np0005592159 nova_compute[226433]: 2026-01-22 14:30:02.331 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:30:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:02.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:02 np0005592159 nova_compute[226433]: 2026-01-22 14:30:02.701 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:30:02 np0005592159 nova_compute[226433]: 2026-01-22 14:30:02.701 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:02.835+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:02 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:02 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:03.816+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:03 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:04.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:04.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:04.779+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:04 np0005592159 nova_compute[226433]: 2026-01-22 14:30:04.938 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:04 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:05 np0005592159 nova_compute[226433]: 2026-01-22 14:30:05.040 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:05.812+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:06 np0005592159 podman[249543]: 2026-01-22 14:30:06.041053372 +0000 UTC m=+0.105307337 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 09:30:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:06.785+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:07 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:07.244 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:30:07 np0005592159 nova_compute[226433]: 2026-01-22 14:30:07.244 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:07 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:07.245 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:30:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:07.817+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:08 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:08.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:08.820+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:09.870+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:09 np0005592159 nova_compute[226433]: 2026-01-22 14:30:09.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:10 np0005592159 nova_compute[226433]: 2026-01-22 14:30:10.043 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:10.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:10.866+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:11 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:11.852+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:12 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:12.246 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:30:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:12.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:12.892+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:13 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:13.889+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:30:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:14 np0005592159 nova_compute[226433]: 2026-01-22 14:30:14.697 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 22 09:30:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:14.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:14.934+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:14 np0005592159 nova_compute[226433]: 2026-01-22 14:30:14.945 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:15 np0005592159 nova_compute[226433]: 2026-01-22 14:30:15.047 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:15 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:15.982+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:16.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:16.988+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:17 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:17.950+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:18 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:18 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:18.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:18.932+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:19 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:19 np0005592159 nova_compute[226433]: 2026-01-22 14:30:19.948 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:19.960+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:20 np0005592159 nova_compute[226433]: 2026-01-22 14:30:20.048 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:20 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:20.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:20.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:20.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:21 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:21.986+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:22 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:22.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:22.937+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:23 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:23 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:23.899+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:24 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:24.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:24.935+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:24 np0005592159 nova_compute[226433]: 2026-01-22 14:30:24.950 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:25 np0005592159 nova_compute[226433]: 2026-01-22 14:30:25.049 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:25 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:25.892+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:26 np0005592159 podman[249631]: 2026-01-22 14:30:26.007475096 +0000 UTC m=+0.062600140 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:30:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:26.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:26.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:26.941+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:27 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:27.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:28 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:28.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:28.971+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:29 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:29.928+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:29 np0005592159 nova_compute[226433]: 2026-01-22 14:30:29.952 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:30 np0005592159 nova_compute[226433]: 2026-01-22 14:30:30.051 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:30 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:30.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:30.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:30.897+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:31 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:31.849+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:32.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:32.876+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:33 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:33.914+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:34 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:30:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:30:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:34.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:34.916+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:34 np0005592159 nova_compute[226433]: 2026-01-22 14:30:34.955 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:35 np0005592159 nova_compute[226433]: 2026-01-22 14:30:35.053 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:35 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:35.959+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:36 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:36.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:36.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:37.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:37 np0005592159 podman[249789]: 2026-01-22 14:30:37.065243384 +0000 UTC m=+0.120809776 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 09:30:37 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:38.009+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:38 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:38.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:38.972+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:39 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:39 np0005592159 nova_compute[226433]: 2026-01-22 14:30:39.958 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:39.957+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:40 np0005592159 nova_compute[226433]: 2026-01-22 14:30:40.056 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:40 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:30:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:40.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:40.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:41 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:41.971+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:42 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:42 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:42.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:42.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:42.939+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:43 np0005592159 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 22 09:30:43 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:43.893+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:44 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:30:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:44.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:44.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:44.922+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:44 np0005592159 nova_compute[226433]: 2026-01-22 14:30:44.962 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:45 np0005592159 nova_compute[226433]: 2026-01-22 14:30:45.057 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 22 09:30:45 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:45.958+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:46 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:46.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:47.003+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:30:47.206 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:47 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:47 np0005592159 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:48.042+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:48 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:30:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:48.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:30:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:48.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:49.063+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:49 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:49 np0005592159 nova_compute[226433]: 2026-01-22 14:30:49.964 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:50.046+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:50 np0005592159 nova_compute[226433]: 2026-01-22 14:30:50.058 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:50 np0005592159 nova_compute[226433]: 2026-01-22 14:30:50.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:50.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:50 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:50.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:51.014+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:51 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:52.055+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:52.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:52.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:52 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:53.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592159 nova_compute[226433]: 2026-01-22 14:30:53.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:53 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:53 np0005592159 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:30:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:54.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:54 np0005592159 nova_compute[226433]: 2026-01-22 14:30:54.512 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:54.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:54.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:54 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:54 np0005592159 nova_compute[226433]: 2026-01-22 14:30:54.967 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:54.989+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:55 np0005592159 nova_compute[226433]: 2026-01-22 14:30:55.061 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:30:55 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:56.020+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:30:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:56.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:30:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:56.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:56 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:56 np0005592159 podman[249927]: 2026-01-22 14:30:56.997899599 +0000 UTC m=+0.053717321 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:30:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:57.066+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592159 nova_compute[226433]: 2026-01-22 14:30:57.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:57 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:57 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:58.108+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.545 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.566 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "001ba9a6-ba0c-438d-8150-5cfbcec3d34f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.566 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "001ba9a6-ba0c-438d-8150-5cfbcec3d34f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:58 np0005592159 nova_compute[226433]: 2026-01-22 14:30:58.582 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:30:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:30:58.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:30:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:30:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:30:58.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:30:58 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.078 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.079 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.086 226437 DEBUG nova.virt.hardware [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.086 226437 INFO nova.compute.claims [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.098 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:30:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:30:59.128+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:30:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:30:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.334 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.481 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.611 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.628 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.629 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.630 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.630 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:30:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:30:59 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/368227021' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.896 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.903 226437 DEBUG nova.compute.provider_tree [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:30:59 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.928 226437 DEBUG nova.scheduler.client.report [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.955 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.956 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:30:59 np0005592159 nova_compute[226433]: 2026-01-22 14:30:59.970 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.007 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.008 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.028 226437 INFO nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.046 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.062 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.136 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.137 226437 DEBUG nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.137 226437 INFO nova.virt.libvirt.driver [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Creating image(s)#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.162 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:31:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:00.165+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.186 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.209 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.212 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.232 226437 DEBUG nova.policy [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '112b71a99add4ffeb28392e66d1a3d24', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06252abc0be74ac08438db3d2f76db14', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.272 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.272 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.273 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.273 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.294 226437 DEBUG nova.storage.rbd_utils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] rbd image 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:31:00 np0005592159 nova_compute[226433]: 2026-01-22 14:31:00.297 226437 DEBUG oslo_concurrency.processutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 001ba9a6-ba0c-438d-8150-5cfbcec3d34f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:31:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:00.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:00 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.152 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Successfully created port: ecd36baa-6fcf-48f7-a5a5-0e085089f614 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:31:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:01.185+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.541 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.542 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:31:01 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:31:01 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/475015090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.955 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Successfully updated port: ecd36baa-6fcf-48f7-a5a5-0e085089f614 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.971 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquiring lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Acquired lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:31:01 np0005592159 nova_compute[226433]: 2026-01-22 14:31:01.976 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.037 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.037 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.040 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.040 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.066 226437 DEBUG nova.compute.manager [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Received event network-changed-ecd36baa-6fcf-48f7-a5a5-0e085089f614 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.067 226437 DEBUG nova.compute.manager [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Refreshing instance network info cache due to event network-changed-ecd36baa-6fcf-48f7-a5a5-0e085089f614. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.067 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:31:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:02.206+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.218 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.221 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4327MB free_disk=20.768470764160156GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.222 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.222 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.309 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.309 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.310 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.310 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.311 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.312 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:31:02 np0005592159 nova_compute[226433]: 2026-01-22 14:31:02.625 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:31:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:02.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:02.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:02 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:02 np0005592159 ceph-mon[77081]: Health check update: 31 slow ops, oldest one blocked for 3248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:31:03 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/59182803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.081 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.087 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.111 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.141 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.142 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.143 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.144 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.164 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Jan 22 09:31:03 np0005592159 nova_compute[226433]: 2026-01-22 14:31:03.164 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:03.255+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:03 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:04.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:04.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:04 np0005592159 nova_compute[226433]: 2026-01-22 14:31:04.974 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:04 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:05 np0005592159 nova_compute[226433]: 2026-01-22 14:31:05.064 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:05.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:06.299+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:06 np0005592159 nova_compute[226433]: 2026-01-22 14:31:06.615 226437 DEBUG nova.network.neutron [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updating instance_info_cache with network_info: [{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:31:06 np0005592159 nova_compute[226433]: 2026-01-22 14:31:06.665 226437 DEBUG oslo_concurrency.lockutils [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] Releasing lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:31:06 np0005592159 nova_compute[226433]: 2026-01-22 14:31:06.665 226437 DEBUG nova.compute.manager [None req-ff247035-7c70-4ffa-9fd3-25fe671e5dd1 112b71a99add4ffeb28392e66d1a3d24 06252abc0be74ac08438db3d2f76db14 - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Instance network_info: |[{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:31:06 np0005592159 nova_compute[226433]: 2026-01-22 14:31:06.667 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:31:06 np0005592159 nova_compute[226433]: 2026-01-22 14:31:06.667 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Refreshing network info cache for port ecd36baa-6fcf-48f7-a5a5-0e085089f614 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:31:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:06.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:06.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:07 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:07.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:08 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:08 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3257 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:08 np0005592159 podman[250162]: 2026-01-22 14:31:08.05223005 +0000 UTC m=+0.103302002 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:31:08 np0005592159 nova_compute[226433]: 2026-01-22 14:31:08.280 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updated VIF entry in instance network info cache for port ecd36baa-6fcf-48f7-a5a5-0e085089f614. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:31:08 np0005592159 nova_compute[226433]: 2026-01-22 14:31:08.280 226437 DEBUG nova.network.neutron [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Updating instance_info_cache with network_info: [{"id": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "address": "fa:16:3e:8c:dd:7e", "network": {"id": "066d4644-87f5-4f3e-abdb-f9409f719569", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-1653981788-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06252abc0be74ac08438db3d2f76db14", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapecd36baa-6f", "ovs_interfaceid": "ecd36baa-6fcf-48f7-a5a5-0e085089f614", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:31:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:08.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:08 np0005592159 nova_compute[226433]: 2026-01-22 14:31:08.356 226437 DEBUG oslo_concurrency.lockutils [req-c632fa84-bcf2-4964-a131-cc94bdc7155b req-365f93b4-798b-40be-b41e-84ba9152fee4 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-001ba9a6-ba0c-438d-8150-5cfbcec3d34f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:31:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:08.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:09 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:09.284+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:09 np0005592159 nova_compute[226433]: 2026-01-22 14:31:09.977 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:10 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:10 np0005592159 nova_compute[226433]: 2026-01-22 14:31:10.066 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:10.250+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:10.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:10.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:11 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:11.272+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:12.306+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:12.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:12.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:13.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:13 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:14.399+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:14.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:14.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:14 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:15 np0005592159 nova_compute[226433]: 2026-01-22 14:31:15.024 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:15 np0005592159 nova_compute[226433]: 2026-01-22 14:31:15.068 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:15.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:15 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:16.457+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:16.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:16.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:17.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:17 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:18.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:18.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:18.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:18 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:19.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:19 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:20 np0005592159 nova_compute[226433]: 2026-01-22 14:31:20.027 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:20 np0005592159 nova_compute[226433]: 2026-01-22 14:31:20.071 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:20.382+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:20.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:20.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:20 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:21.404+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:21 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:22.366+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:22.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:22.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:22 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:23.327+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:23 np0005592159 nova_compute[226433]: 2026-01-22 14:31:23.531 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:23 np0005592159 nova_compute[226433]: 2026-01-22 14:31:23.531 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Jan 22 09:31:24 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:24 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:24.288+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:24.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:25 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:25 np0005592159 nova_compute[226433]: 2026-01-22 14:31:25.067 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:25 np0005592159 nova_compute[226433]: 2026-01-22 14:31:25.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:25.256+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:26.252+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:26.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:27 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:27.280+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:28 np0005592159 podman[250250]: 2026-01-22 14:31:28.058442255 +0000 UTC m=+0.109381520 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:31:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:28.305+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:28.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:28.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
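
The radosgw lines above are load-balancer health probes: an anonymous "HEAD / HTTP/1.0" from 192.168.122.100 and 192.168.122.102 roughly every two seconds, each answered 200 with sub-millisecond latency. A minimal sketch for pulling the client IP, request, status and latency out of the beast access lines; the regex mirrors the field layout shown above and is an assumption, not radosgw's own log parser:

    import re

    # Matches the beast access-log layout seen above, e.g.
    # beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:26.812 +0000]
    #   "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    def parse_beast(line):
        """Return (ip, request, status, latency_seconds), or None for non-beast lines."""
        m = BEAST_RE.search(line)
        if not m:
            return None
        return m.group('ip'), m.group('req'), int(m.group('status')), float(m.group('latency'))

    if __name__ == '__main__':
        sample = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
                  '[22/Jan/2026:14:31:26.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
                  'latency=0.000000000s')
        print(parse_beast(sample))  # ('192.168.122.102', 'HEAD / HTTP/1.0', 200, 0.0)

Fed with journalctl output, this separates the steady probe traffic from any real S3/Swift requests when scanning for latency outliers.
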
Jan 22 09:31:29 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:29 np0005592159 ovn_controller[133156]: 2026-01-22T14:31:29Z|00056|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 09:31:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:29.334+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592159 nova_compute[226433]: 2026-01-22 14:31:30.069 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:30 np0005592159 nova_compute[226433]: 2026-01-22 14:31:30.074 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:30 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:30.324+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:30.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:30.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:31 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:31:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:31.310+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 8 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:31:32 np0005592159 ceph-mon[77081]: 8 slow requests (by type [ 'delayed' : 8 ] most affected pool [ 'vms' : 6 ])
Jan 22 09:31:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:32.359+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:32.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:32.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:33 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3282 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:33 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:33.347+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
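
Throughout this window osd.2 keeps reporting the same stuck operation: an omap-get-vals read of rbd_mirror_snapshot_schedule in pg 2.12 from client.14140, with the count climbing from 3 to 8 to 36 slow ops, mostly against the vms pool. A hedged sketch for listing the ages of in-flight ops on that OSD; it assumes a shell where the standard ceph CLI can reach this cluster (for example a cephadm shell for the 088fe176-... fsid), and that dump_ops_in_flight and its 'age'/'description' fields look the same on your release:

    import json
    import subprocess

    def slow_op_ages(osd_id=2, threshold=30.0):
        """Return (age_seconds, description) for in-flight ops older than threshold."""
        out = subprocess.check_output(
            ['ceph', 'tell', f'osd.{osd_id}', 'dump_ops_in_flight'], text=True)
        ops = json.loads(out).get('ops', [])
        # Each op carries an 'age' in seconds and a 'description' like the
        # osd_op(...) text repeated in the log lines above; field names may
        # differ slightly between Ceph releases.
        return [(op['age'], op['description']) for op in ops if op['age'] > threshold]

    if __name__ == '__main__':
        for age, desc in sorted(slow_op_ages(), reverse=True)[:5]:
            print(f'{age:10.1f}s  {desc[:80]}')
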
Jan 22 09:31:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:34 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:34.354+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:34.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:34.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:35 np0005592159 nova_compute[226433]: 2026-01-22 14:31:35.072 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:35 np0005592159 nova_compute[226433]: 2026-01-22 14:31:35.075 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:35 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:35.318+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:36.356+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:36.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:36.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #106. Immutable memtables: 0.
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.292379) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 106
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297292509, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 2450, "num_deletes": 251, "total_data_size": 4674035, "memory_usage": 4731856, "flush_reason": "Manual Compaction"}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #107: started
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297316382, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 107, "file_size": 3057164, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53662, "largest_seqno": 56107, "table_properties": {"data_size": 3048175, "index_size": 5163, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23266, "raw_average_key_size": 21, "raw_value_size": 3028198, "raw_average_value_size": 2778, "num_data_blocks": 222, "num_entries": 1090, "num_filter_entries": 1090, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092130, "oldest_key_time": 1769092130, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 24133 microseconds, and 13205 cpu microseconds.
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.316530) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #107: 3057164 bytes OK
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.316660) [db/memtable_list.cc:519] [default] Level-0 commit table #107 started
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319011) [db/memtable_list.cc:722] [default] Level-0 commit table #107: memtable #1 done
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319036) EVENT_LOG_v1 {"time_micros": 1769092297319028, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.319060) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 4662958, prev total WAL file size 4662958, number of live WAL files 2.
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000103.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321858) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [107(2985KB)], [105(9846KB)]
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297321915, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [107], "files_L6": [105], "score": -1, "input_data_size": 13140142, "oldest_snapshot_seqno": -1}
Jan 22 09:31:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:37.392+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #108: 10189 keys, 11563970 bytes, temperature: kUnknown
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297421109, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 108, "file_size": 11563970, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11504562, "index_size": 32800, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 273383, "raw_average_key_size": 26, "raw_value_size": 11327596, "raw_average_value_size": 1111, "num_data_blocks": 1246, "num_entries": 10189, "num_filter_entries": 10189, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092297, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 108, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.421427) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 11563970 bytes
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.422847) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.4 rd, 116.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 9.6 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 10704, records dropped: 515 output_compression: NoCompression
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.422865) EVENT_LOG_v1 {"time_micros": 1769092297422857, "job": 66, "event": "compaction_finished", "compaction_time_micros": 99264, "compaction_time_cpu_micros": 51502, "output_level": 6, "num_output_files": 1, "total_output_size": 11563970, "num_input_records": 10704, "num_output_records": 10189, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297423607, "job": 66, "event": "table_file_deletion", "file_number": 107}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000105.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092297425755, "job": 66, "event": "table_file_deletion", "file_number": 105}
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.321747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:37.425884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
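
The ceph-mon rocksdb burst above is the monitor compacting its store.db: JOB 65 flushes a ~4.7 MB memtable to L0 table #107, JOB 66 immediately compacts it with #105 into a single ~11.6 MB L6 table, and the superseded WAL and SST files are deleted. The EVENT_LOG_v1 lines carry machine-readable JSON; a minimal sketch for extracting those payloads from journal output, where the regex is an assumption matching the line layout shown above:

    import json
    import re
    import sys

    EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

    def rocksdb_events(lines):
        """Yield the JSON payload of each rocksdb EVENT_LOG_v1 journal line."""
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

    if __name__ == '__main__':
        # e.g. pipe journalctl output for the mon unit into this script
        for ev in rocksdb_events(sys.stdin):
            if ev.get('event') == 'compaction_finished':
                print(ev['job'], ev['compaction_time_micros'], ev['total_output_size'])

For JOB 66 this prints 66, 99264 and 11563970, matching the compaction_finished line above.
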
Jan 22 09:31:38 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:38 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3287 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:38.364+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:38.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:38.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:39 np0005592159 podman[250274]: 2026-01-22 14:31:39.063859632 +0000 UTC m=+0.114167473 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
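
The podman line records the periodic container health check: ovn_controller reports health_status=healthy with a failing streak of 0, using the /openstack/healthcheck test mounted from /var/lib/openstack/healthchecks/ovn_controller. The same check can be triggered by hand; a small sketch, assuming the standard podman CLI is on the host, with the container name taken from the log:

    import subprocess

    def container_healthy(name='ovn_controller'):
        """Run the container's configured healthcheck; exit code 0 means healthy."""
        return subprocess.run(['podman', 'healthcheck', 'run', name]).returncode == 0

    if __name__ == '__main__':
        print('healthy' if container_healthy() else 'unhealthy')
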
Jan 22 09:31:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:39.316+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:39 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.076 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.077 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.078 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.079 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:31:40 np0005592159 nova_compute[226433]: 2026-01-22 14:31:40.081 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:31:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:40.297+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:40.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:40.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:41.261+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:41 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:42.289+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #109. Immutable memtables: 0.
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.464631) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 109
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302464727, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 349, "num_deletes": 258, "total_data_size": 193899, "memory_usage": 201976, "flush_reason": "Manual Compaction"}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #110: started
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302468626, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 110, "file_size": 127264, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56112, "largest_seqno": 56456, "table_properties": {"data_size": 125158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5510, "raw_average_key_size": 18, "raw_value_size": 120743, "raw_average_value_size": 397, "num_data_blocks": 12, "num_entries": 304, "num_filter_entries": 304, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092298, "oldest_key_time": 1769092298, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 4024 microseconds, and 1681 cpu microseconds.
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.468672) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #110: 127264 bytes OK
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.468700) [db/memtable_list.cc:519] [default] Level-0 commit table #110 started
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470763) [db/memtable_list.cc:722] [default] Level-0 commit table #110: memtable #1 done
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470786) EVENT_LOG_v1 {"time_micros": 1769092302470780, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.470814) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 191450, prev total WAL file size 191450, number of live WAL files 2.
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000106.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.471367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323630' seq:72057594037927935, type:22 .. '6C6F676D0032353134' seq:0, type:0; will stop at (end)
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [110(124KB)], [108(11MB)]
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302471479, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [110], "files_L6": [108], "score": -1, "input_data_size": 11691234, "oldest_snapshot_seqno": -1}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #111: 9966 keys, 11552162 bytes, temperature: kUnknown
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302542141, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 111, "file_size": 11552162, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11493797, "index_size": 32333, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24965, "raw_key_size": 269754, "raw_average_key_size": 27, "raw_value_size": 11320164, "raw_average_value_size": 1135, "num_data_blocks": 1223, "num_entries": 9966, "num_filter_entries": 9966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092302, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 111, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.543603) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 11552162 bytes
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.545531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.2 rd, 163.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 11.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(182.6) write-amplify(90.8) OK, records in: 10493, records dropped: 527 output_compression: NoCompression
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.545590) EVENT_LOG_v1 {"time_micros": 1769092302545565, "job": 68, "event": "compaction_finished", "compaction_time_micros": 70750, "compaction_time_cpu_micros": 33276, "output_level": 6, "num_output_files": 1, "total_output_size": 11552162, "num_input_records": 10493, "num_output_records": 9966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302545881, "job": 68, "event": "table_file_deletion", "file_number": 110}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000108.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092302550143, "job": 68, "event": "table_file_deletion", "file_number": 108}
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.471178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550270) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550280) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:31:42.550290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:31:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:42.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:42.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:43.260+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:43 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:43 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3292 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:44.293+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:44 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:44.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:31:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:44.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.083 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:45 np0005592159 nova_compute[226433]: 2026-01-22 14:31:45.118 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
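
The nova_compute DEBUG lines show ovsdbapp's OVS IDL connection to tcp:127.0.0.1:6640 going idle for about 5 s, sending an inactivity probe while transitioning to IDLE, and returning to ACTIVE once the reply arrives; the cycle repeats every few seconds while the agent has nothing to do and is normal keep-alive behaviour. A minimal sketch measuring the spacing of those probes in journal output; the timestamp pattern is an assumption matching the lines above:

    import re
    import sys
    from datetime import datetime

    PROBE_RE = re.compile(
        r'(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ DEBUG .* sending inactivity probe')

    def probe_intervals(lines):
        """Yield seconds between consecutive 'sending inactivity probe' messages."""
        last = None
        for line in lines:
            m = PROBE_RE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S.%f')
            if last is not None:
                yield (ts - last).total_seconds()
            last = ts

    if __name__ == '__main__':
        for gap in probe_intervals(sys.stdin):
            print(f'{gap:.3f}s between probes')

Between the probes at 14:31:40.078 and 14:31:45.085 above this yields roughly 5.007 s, consistent with the "idle 5002 ms" messages.
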
Jan 22 09:31:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:45.338+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:45 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:46.355+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:46.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:31:47.207 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:31:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:47.308+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:47 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:31:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:48.279+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:48 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:48 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3297 sec, osd.2 has slow ops (SLOW_OPS)
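osd.2 has now been reporting the same 36 slow ops for close to an hour (3297 s). A minimal sketch of inspecting the blocked ops via the OSD admin socket; "ceph daemon osd.2 dump_ops_in_flight" is a standard Ceph admin-socket command, but the sketch assumes it runs where osd.2's socket is reachable (for this containerized cluster, e.g. inside the OSD container or a cephadm shell on this host):

    import json
    import subprocess

    # Dump the ops osd.2 currently has in flight and show their age.
    out = subprocess.check_output(
        ["ceph", "daemon", "osd.2", "dump_ops_in_flight"]
    )
    ops = json.loads(out)
    for op in ops.get("ops", []):
        print(op.get("age"), op.get("description"))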
Jan 22 09:31:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:48.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:49.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:49 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:50 np0005592159 nova_compute[226433]: 2026-01-22 14:31:50.119 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:50 np0005592159 nova_compute[226433]: 2026-01-22 14:31:50.120 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:31:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:50.281+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:50 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:50.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:51.313+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:51 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:51 np0005592159 nova_compute[226433]: 2026-01-22 14:31:51.530 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:52.347+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:52 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:52.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:52.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:53.326+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:53 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:53 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:31:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:54.286+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:54 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:54.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:54.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:55 np0005592159 nova_compute[226433]: 2026-01-22 14:31:55.122 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:31:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:55.321+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:55 np0005592159 nova_compute[226433]: 2026-01-22 14:31:55.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:55 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:55 np0005592159 nova_compute[226433]: 2026-01-22 14:31:55.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:56.344+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:56 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:56.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:57.322+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:57 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:58.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:58 np0005592159 nova_compute[226433]: 2026-01-22 14:31:58.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:58 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:31:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:31:58.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:31:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:31:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:31:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:31:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:31:59 np0005592159 podman[250541]: 2026-01-22 14:31:59.034623676 +0000 UTC m=+0.082433781 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
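The podman entry above is the periodic container healthcheck firing for ovn_metadata_agent (config_data defines 'test': '/openstack/healthcheck'), reporting health_status=healthy. A minimal sketch of triggering the same check by hand; "podman healthcheck run" is a standard podman subcommand and the container name is taken from this log entry:

    import subprocess

    # Re-run the container's configured healthcheck; exit status 0 means healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_metadata_agent"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")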
Jan 22 09:31:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:31:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:31:59.356+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:31:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.561 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.562 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.563 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.563 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.564 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.564 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:31:59 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.723 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.723 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.724 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:31:59 np0005592159 nova_compute[226433]: 2026-01-22 14:31:59.724 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:00 np0005592159 nova_compute[226433]: 2026-01-22 14:32:00.123 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:00 np0005592159 nova_compute[226433]: 2026-01-22 14:32:00.217 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:32:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:00.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:00 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:00.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:00.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.137 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.252 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.252 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.253 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.253 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.254 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.254 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:32:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:01.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:01 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.698 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.699 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.700 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.700 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:32:01 np0005592159 nova_compute[226433]: 2026-01-22 14:32:01.701 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:32:02 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4247879608' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.141 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
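The two processutils lines above show nova's resource tracker shelling out to ceph df to report RBD pool capacity; the call returned in 0.44 s even while osd.2 is reporting slow ops. A minimal sketch of the same call, assuming the client.openstack keyring and /etc/ceph/ceph.conf used in the logged command and the usual stats/pools layout of "ceph df --format=json" output:

    import json
    import subprocess

    # Same command nova just ran (copied from the log line above).
    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    df = json.loads(out)
    print("total bytes:", df["stats"]["total_bytes"],
          "avail bytes:", df["stats"]["total_avail_bytes"])
    for pool in df["pools"]:
        print(pool["name"], pool["stats"].get("bytes_used"))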
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.315 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.316 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.320 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.320 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:32:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:02.336+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.555 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.557 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4330MB free_disk=20.73322296142578GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.558 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.558 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:02 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:02 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3307 sec, osd.2 has slow ops (SLOW_OPS)
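Each SLOW_OPS health update pairs a report time with a "blocked for N sec" age, and the age grows one-for-one with the clock (3297 s at 14:31:48, 3302 s at 14:31:53, 3307 s at 14:32:02), so the oldest op has been stuck since roughly 13:36:55 UTC. The arithmetic, using the values from the entry above:

    from datetime import datetime, timedelta, timezone

    # Report time and blocked-for age copied from the health check line above.
    report_time = datetime(2026, 1, 22, 14, 32, 2, tzinfo=timezone.utc)
    blocked_for = timedelta(seconds=3307)
    print("oldest op blocked since", report_time - blocked_for)
    # -> 2026-01-22 13:36:55+00:00 (successive reports jitter by a few seconds)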
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.781 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.782 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.782 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.783 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.784 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.784 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 7 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.785 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1408MB phys_disk=20GB used_disk=7GB total_vcpus=8 used_vcpus=7 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.809 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing inventories for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.835 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating ProviderTree inventory for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.835 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Updating inventory in ProviderTree for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
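The inventory pushed to placement above carries total, reserved and allocation_ratio per resource class; the schedulable capacity it implies is (total - reserved) * allocation_ratio, i.e. 32 VCPUs, 7167 MB of RAM and about 17.1 GB of disk for this node. A minimal sketch of that arithmetic with the values from the logged inventory:

    # Schedulable capacity implied by the ProviderTree inventory logged above.
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 20, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, round(capacity, 2))  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 17.1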
Jan 22 09:32:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:02.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:02.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.851 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing aggregate associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Jan 22 09:32:02 np0005592159 nova_compute[226433]: 2026-01-22 14:32:02.873 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Refreshing trait associations for resource provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc, traits: COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE2,COMPUTE_SECURITY_TPM_2_0,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_EXTEND,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_MMX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_STORAGE_BUS_FDC _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.036 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:03.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:32:03 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1801557705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.488 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.497 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:32:03 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.649 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.652 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:32:03 np0005592159 nova_compute[226433]: 2026-01-22 14:32:03.653 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.095s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:04.389+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:04 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:04.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:05 np0005592159 nova_compute[226433]: 2026-01-22 14:32:05.126 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:05.373+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:05 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:06.363+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:06 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:06.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:07.380+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:07 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:07 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:08.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:08 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:08.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:08.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:09.391+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:09 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592159 podman[250660]: 2026-01-22 14:32:10.09729533 +0000 UTC m=+0.144210473 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.127 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.129 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.130 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:10 np0005592159 nova_compute[226433]: 2026-01-22 14:32:10.130 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:10.352+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:10.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:10.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:11.317+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:11 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:12.332+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:12.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:12.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:13.311+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:13 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:13 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:14.331+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:14 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:14.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:14.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.131 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.133 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.150 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:15 np0005592159 nova_compute[226433]: 2026-01-22 14:32:15.151 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:15.361+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:15 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:16.342+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:16.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:16.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:17.353+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:17 np0005592159 nova_compute[226433]: 2026-01-22 14:32:17.649 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:17 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:32:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:32:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:32:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/458429548' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:32:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:18.381+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:18.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:18.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:19.375+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:19 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.151 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.153 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.154 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.154 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:20 np0005592159 nova_compute[226433]: 2026-01-22 14:32:20.187 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:20.406+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:20.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:20 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:21.428+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:22.412+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:22.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:22.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:23 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:23 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3327 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:23.438+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:24 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:24.483+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:24.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:25 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:25 np0005592159 nova_compute[226433]: 2026-01-22 14:32:25.189 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:25 np0005592159 nova_compute[226433]: 2026-01-22 14:32:25.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:25.452+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:26 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:26.418+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:26.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:27 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:27.374+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.870 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.870 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.885 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.954 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.954 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.960 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:32:27 np0005592159 nova_compute[226433]: 2026-01-22 14:32:27.960 226437 INFO nova.compute.claims [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:32:28 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:28 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3337 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.225 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:28.397+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:32:28 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3871583424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.641 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.646 226437 DEBUG nova.compute.provider_tree [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.660 226437 DEBUG nova.scheduler.client.report [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.680 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.681 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.724 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.724 226437 DEBUG nova.network.neutron [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.750 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.775 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:32:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 09:32:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:28 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.884 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.885 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.885 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating image(s)#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.915 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.944 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.973 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:28 np0005592159 nova_compute[226433]: 2026-01-22 14:32:28.978 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.044 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.046 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "389efd6047b99779d5161939afa4f2bdb261bfd0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.046 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.047 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "389efd6047b99779d5161939afa4f2bdb261bfd0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.076 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.081 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:29 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.238 226437 DEBUG nova.network.neutron [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.239 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.357 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/389efd6047b99779d5161939afa4f2bdb261bfd0 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:29.396+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.452 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] resizing rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.566 226437 DEBUG nova.objects.instance [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'migration_context' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.588 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.588 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ensure instance console log exists: /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.589 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.590 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.590 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.592 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.596 226437 WARNING nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.602 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.604 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.609 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.610 226437 DEBUG nova.virt.libvirt.host [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
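[annotation] The two probes above (cgroups v1 CPU controller missing, cgroups v2 controller found) boil down to reading the unified hierarchy's controller list. A hypothetical stand-alone check, assuming the default /sys/fs/cgroup mount point:

    # Sketch only: report whether the cgroups v2 'cpu' controller is available,
    # which is what the "CPU controller found on host" message above reflects.
    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root='/sys/fs/cgroup'):
        controllers = Path(root) / 'cgroup.controllers'
        if not controllers.exists():
            return False  # no unified (v2) hierarchy mounted at this root
        return 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())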
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.613 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.613 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T14:32:25Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='25a58c00-ff14-4ac2-b88f-b2e5060d0aa8',id=28,is_public=True,memory_mb=128,name='tempest-test_resize_flavor_-144408879',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.614 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.614 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.615 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.616 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.617 226437 DEBUG nova.virt.hardware [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
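[annotation] The topology lines above walk from flavor/image limits of 0:0:0 (unset) to a single candidate of 1 socket, 1 core, 1 thread for this 1-vCPU flavor. The selection can be illustrated with a small brute-force enumeration; this is an illustrative sketch, not the nova.virt.hardware algorithm itself.

    # Illustrative sketch: enumerate (sockets, cores, threads) combinations whose
    # product equals the vCPU count, capped by the 65536 limits seen in the log.
    import itertools
    from collections import namedtuple

    Topology = namedtuple('Topology', 'sockets cores threads')

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        found = []
        for s, c, t in itertools.product(
                range(1, min(vcpus, max_sockets) + 1),
                range(1, min(vcpus, max_cores) + 1),
                range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                found.append(Topology(s, c, t))
        return found

    print(possible_topologies(1))  # [Topology(sockets=1, cores=1, threads=1)]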
Jan 22 09:32:29 np0005592159 nova_compute[226433]: 2026-01-22 14:32:29.620 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:30 np0005592159 podman[250955]: 2026-01-22 14:32:30.026754315 +0000 UTC m=+0.078605511 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:32:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:32:30 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1888876612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.059 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
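[annotation] The `ceph mon dump --format=json` call above is how the monitor addresses that later appear as <host> elements in the guest's RBD disk definition are discovered. A hypothetical parse of that output, reusing the exact command from the log; the 'public_addr' field format ("ip:port/nonce") is an assumption here.

    # Sketch: list monitor host/port pairs from "ceph mon dump --format=json".
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'mon', 'dump', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    mon_dump = json.loads(out)

    hosts = []
    for mon in mon_dump.get('mons', []):
        addr = mon.get('public_addr', '')   # e.g. "192.168.122.100:6789/0"
        host, _, rest = addr.partition(':')
        port = rest.split('/')[0] or '6789'
        hosts.append((host, port))

    print(hosts)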
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.088 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.092 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:30 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.191 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:30.429+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:32:30 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2091226058' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.573 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.575 226437 DEBUG nova.objects.instance [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_devices' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.599 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <uuid>33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</uuid>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <name>instance-00000015</name>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:name>tempest-MigrationsAdminTest-server-685681022</nova:name>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:32:29</nova:creationTime>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:flavor name="tempest-test_resize_flavor_-144408879">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:user uuid="549def9aedaa41be8d41ae7c6e534303">tempest-MigrationsAdminTest-775661994-project-member</nova:user>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <nova:project uuid="98a3ce5a8a524b0d8327784d9df9a9db">tempest-MigrationsAdminTest-775661994</nova:project>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <nova:ports/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="serial">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="uuid">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log" append="off"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:32:30 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:32:30 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:32:30 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:32:30 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
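[annotation] For reference, the generated domain XML above can be sanity-checked with nothing more than the standard library; the snippet below parses an abbreviated copy of it and prints each disk's target, RBD source and monitor hosts. It is an inspection aid, not part of the libvirt driver.

    # Sketch: extract disk targets, RBD image names and monitor hosts from a
    # guest XML shaped like the one logged above (abbreviated to what is read).
    import xml.etree.ElementTree as ET

    domain_xml = """<domain type="kvm">
      <devices>
        <disk type="network" device="disk">
          <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk">
            <host name="192.168.122.100" port="6789"/>
            <host name="192.168.122.102" port="6789"/>
            <host name="192.168.122.101" port="6789"/>
          </source>
          <target dev="vda" bus="virtio"/>
        </disk>
      </devices>
    </domain>"""

    root = ET.fromstring(domain_xml)
    for disk in root.findall('./devices/disk'):
        source = disk.find('source')
        target = disk.find('target')
        hosts = [(h.get('name'), h.get('port')) for h in source.findall('host')]
        print(target.get('dev'), source.get('name'), hosts)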
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.650 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.650 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.651 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Using config drive#033[00m
Jan 22 09:32:30 np0005592159 nova_compute[226433]: 2026-01-22 14:32:30.870 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 09:32:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:30 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.148 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating config drive at /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config#033[00m
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.154 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78vci41 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:31 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.281 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn78vci41" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.311 226437 DEBUG nova.storage.rbd_utils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rbd image 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.316 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:31.469+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.478 226437 DEBUG oslo_concurrency.processutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:32:31 np0005592159 nova_compute[226433]: 2026-01-22 14:32:31.479 226437 INFO nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Deleting local config drive /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/disk.config because it was imported into RBD.#033[00m
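[annotation] The three steps just logged (mkisofs, rbd import, local delete) map one-to-one onto the commands below. This is a hypothetical re-run of the same flow via subprocess; the contents of the staging directory (the config-drive metadata tree) are outside the scope of this sketch.

    # Sketch of the config-drive flow from the log: build the ISO, push it into
    # the 'vms' pool as <uuid>_disk.config, then drop the local copy.
    import os
    import subprocess

    instance = '33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4'
    iso_path = f'/var/lib/nova/instances/{instance}/disk.config'
    metadata_dir = '/tmp/tmpn78vci41'  # staging directory name taken from the log

    subprocess.check_call([
        '/usr/bin/mkisofs', '-o', iso_path,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r',
        '-V', 'config-2', metadata_dir])   # 'config-2' is the volume label cloud-init looks for

    subprocess.check_call([
        'rbd', 'import', '--pool', 'vms', iso_path, f'{instance}_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])

    os.unlink(iso_path)  # the guest now reads the config drive over RBD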
Jan 22 09:32:31 np0005592159 systemd-machined[194970]: New machine qemu-5-instance-00000015.
Jan 22 09:32:31 np0005592159 systemd[1]: Started Virtual Machine qemu-5-instance-00000015.
Jan 22 09:32:32 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:32.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.581 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092352.580007, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.581 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.584 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.585 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.588 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance spawned successfully.#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.589 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.612 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.620 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.621 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.622 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.622 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.623 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.623 226437 DEBUG nova.virt.libvirt.driver [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.629 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.664 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.665 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092352.5834947, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.665 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Started (Lifecycle Event)#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.687 226437 INFO nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Took 3.80 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.688 226437 DEBUG nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.690 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.699 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.763 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
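[annotation] The lifecycle lines above compare a DB power_state of 0 against a VM power_state of 1. Those integers are Nova's numeric power-state constants; the small mapping below is only a readability aid and is an assumed reproduction of nova.compute.power_state, not imported from it.

    # Readability aid (assumed mapping of Nova's numeric power states):
    POWER_STATES = {
        0: 'NOSTATE',
        1: 'RUNNING',
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }

    def describe(db_state, vm_state):
        return (f"DB={POWER_STATES.get(db_state, db_state)} "
                f"VM={POWER_STATES.get(vm_state, vm_state)}")

    print(describe(0, 1))  # DB=NOSTATE VM=RUNNING: guest is up, DB not yet synced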
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.791 226437 INFO nova.compute.manager [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Took 4.86 seconds to build instance.#033[00m
Jan 22 09:32:32 np0005592159 nova_compute[226433]: 2026-01-22 14:32:32.808 226437 DEBUG oslo_concurrency.lockutils [None req-1ce183ee-531b-445c-91b0-2605e87a9476 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 09:32:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:32 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:33 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3342 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:33 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:33.536+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
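[annotation] The SLOW_OPS health check has been repeating throughout this window (36 ops on osd.2, oldest blocked for over 3300 seconds). A hypothetical one-off check that surfaces the same summary from the CLI, assuming `ceph health detail` is reachable with the same cephx user and that its JSON output carries a top-level 'checks' map:

    # Sketch: print the SLOW_OPS summary (if any) from "ceph health detail".
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'health', 'detail', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    health = json.loads(out)

    slow = health.get('checks', {}).get('SLOW_OPS')
    if slow:
        print(slow['summary']['message'])   # e.g. "36 slow ops, oldest one blocked for 3342 sec, ..."
    else:
        print('no SLOW_OPS check active')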
Jan 22 09:32:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:34 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:34.502+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:32:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:34.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:32:35 np0005592159 nova_compute[226433]: 2026-01-22 14:32:35.194 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:35 np0005592159 nova_compute[226433]: 2026-01-22 14:32:35.196 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:35 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:35.468+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:36 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:36.484+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:36.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:37 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:37.533+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:38 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:38 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:38.493+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:38.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:38 np0005592159 nova_compute[226433]: 2026-01-22 14:32:38.936 226437 DEBUG nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Stashing vm_state: active _prep_resize /usr/lib/python3.9/site-packages/nova/compute/manager.py:5560#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.033 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.034 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.074 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_requests' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.095 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.096 226437 INFO nova.compute.claims [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.096 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'resources' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.113 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'pci_devices' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.178 226437 INFO nova.compute.resource_tracker [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0#033[00m
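Note: the resize_claim messages above show the ResourceTracker serializing on the "compute_resources" lock via oslo.concurrency before it records resource usage for migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0. A minimal sketch of that locking pattern follows, using only the oslo.concurrency library; the function name and flavor are illustrative, not Nova's actual code.

    # Hedged illustration of the oslo.concurrency pattern behind the
    # 'Lock "compute_resources" acquired/released' messages. Names are made up.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def claim_resources_for_resize(instance_uuid, flavor):
        # Runs under the same in-process lock style the resource tracker uses,
        # so concurrent claims cannot double-book the host.
        print(f"claiming resources for {instance_uuid} with flavor {flavor}")

    claim_resources_for_resize("33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4", "m1.large")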
Jan 22 09:32:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.379 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:32:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:39.454+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:39 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:32:39 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3676994071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.841 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
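Note: the "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" subprocess above is how the RBD-backed compute node learns pool capacity for resource reporting. A rough sketch of consuming that output follows, assuming the same CLI and keyring are available; the JSON key names are checked defensively because they vary slightly across Ceph releases.

    # Hedged sketch: run the same "ceph df" call seen in the log and pull out
    # cluster totals and per-pool usage.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"]
    )
    df = json.loads(out)
    stats = df.get("stats", {})
    total = stats.get("total_bytes")
    avail = stats.get("total_avail_bytes")
    if total and avail:
        print(f"cluster: {total / 2**30:.1f} GiB total, {avail / 2**30:.1f} GiB free")
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("bytes_used"))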
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.851 226437 DEBUG nova.compute.provider_tree [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.878 226437 DEBUG nova.scheduler.client.report [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
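Note: the inventory record above is what the resource tracker reports to Placement; the schedulable capacity of each resource class is (total - reserved) * allocation_ratio, so this host advertises 32 VCPU, 7167 MB of RAM and about 17 GB of disk. A small sketch of that arithmetic, using the numbers from the log line:

    # Hedged sketch: capacity Placement derives from the inventory dict above,
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 20,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 17.1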
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.928 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.resize_claim" :: held 0.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.929 226437 INFO nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Migrating#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.977 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.977 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:32:39 np0005592159 nova_compute[226433]: 2026-01-22 14:32:39.978 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.184 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.197 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:40.416+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.546 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.569 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.662 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting migrate_disk_and_power_off migrate_disk_and_power_off /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11511#033[00m
Jan 22 09:32:40 np0005592159 nova_compute[226433]: 2026-01-22 14:32:40.666 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Jan 22 09:32:40 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:40.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:41 np0005592159 podman[251209]: 2026-01-22 14:32:41.131598267 +0000 UTC m=+0.179978263 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
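Note: the podman event above is the periodic health check for the ovn_controller container reporting health_status=healthy with a failing streak of 0; the check itself is the /openstack/healthcheck script mounted into the container per its config_data. The same probe can be re-run by hand; a sketch wrapped in Python for consistency with the other examples (container name taken from the log, commands are standard podman):

    # Hedged sketch: re-run the ovn_controller health check that produced the
    # "health_status=healthy" event above, then dump the container's state.
    import subprocess

    subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=False)
    # Exit code 0 means healthy; non-zero means the configured test failed.

    subprocess.run(["podman", "inspect", "ovn_controller"], check=False)
    # The inspect output includes the health state and recent check results.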
Jan 22 09:32:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:41.430+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:41 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:42.426+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:42 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:42.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:43.440+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:43 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:43 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 3352 sec, osd.2 has slow ops (SLOW_OPS)
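Note: at this point the monitor has promoted the per-OSD warnings into a cluster health check: SLOW_OPS on osd.2, with the oldest op (the omap read of rbd_mirror_snapshot_schedule repeated throughout these lines) blocked for roughly 3352 seconds. A hedged sketch of the usual first diagnostic steps follows, assuming the admin keyring is available and that the OSD admin-socket commands are run on the OSD host (inside the osd.2 container for a cephadm-style deployment).

    # Hedged sketch: inspect the SLOW_OPS health check reported above.
    import subprocess

    # Cluster-wide view of the health check and which daemons it names.
    subprocess.run(["ceph", "health", "detail"], check=False)

    # Per-op detail from the affected OSD's admin socket.
    subprocess.run(["ceph", "daemon", "osd.2", "dump_ops_in_flight"], check=False)
    subprocess.run(["ceph", "daemon", "osd.2", "dump_historic_slow_ops"], check=False)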
Jan 22 09:32:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:44.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 09:32:44 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:44.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:44.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.198 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.199 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.199 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.200 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.200 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:45 np0005592159 nova_compute[226433]: 2026-01-22 14:32:45.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
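Note: the ovsdbapp lines above are python-ovs's reconnect state machine keeping the connection to the local ovsdb-server (tcp:127.0.0.1:6640) alive: after about 5 seconds of idleness it sends an inactivity probe, which on the wire is a JSON-RPC "echo" request (RFC 7047), then returns to ACTIVE when the reply arrives. A minimal sketch of that probe, assuming the same local endpoint is listening:

    # Hedged sketch: send the OVSDB JSON-RPC "echo" that the inactivity probe
    # messages above correspond to, against the endpoint from the log.
    import json
    import socket

    with socket.create_connection(("127.0.0.1", 6640), timeout=5) as sock:
        request = {"method": "echo", "params": [], "id": "probe"}
        sock.sendall(json.dumps(request).encode())
        reply = sock.recv(4096)
        # A healthy server echoes the params back; a long-lived client would
        # need proper JSON-RPC framing rather than a single recv().
        print(json.loads(reply))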
Jan 22 09:32:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:45.471+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:45 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 09:32:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:46.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:46.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:46.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:46 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:32:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:32:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:47.541+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:47 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:48.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:48.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:48 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:32:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:32:48 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:49.532+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:49 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:50 np0005592159 nova_compute[226433]: 2026-01-22 14:32:50.201 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:50.505+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:50 np0005592159 nova_compute[226433]: 2026-01-22 14:32:50.718 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
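Note: migrate_disk_and_power_off first asks the guest for a clean shutdown; the "Instance in state 1 after 10 seconds - resending shutdown" line is the libvirt driver re-issuing that request while it polls the domain state, until its shutdown budget is exhausted and it falls back to a hard power-off. A rough sketch of that poll-and-retry loop against libvirt's CLI follows, assuming virsh access on the compute host; the domain name and timings are placeholders, not Nova's actual values.

    # Hedged sketch: the kind of "request shutdown, then poll state" loop the
    # _clean_shutdown messages above reflect. Domain name and timings are
    # placeholders.
    import subprocess
    import time

    DOMAIN = "instance-00000042"   # placeholder libvirt domain name
    TIMEOUT = 60                   # placeholder overall budget, seconds
    RETRY_EVERY = 10               # matches the 10 s resend seen in the log

    subprocess.run(["virsh", "shutdown", DOMAIN], check=False)
    deadline = time.time() + TIMEOUT
    while time.time() < deadline:
        state = subprocess.run(["virsh", "domstate", DOMAIN],
                               capture_output=True, text=True).stdout.strip()
        if state == "shut off":
            break
        time.sleep(RETRY_EVERY)
        subprocess.run(["virsh", "shutdown", DOMAIN], check=False)  # resend
    else:
        subprocess.run(["virsh", "destroy", DOMAIN], check=False)   # hard stop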
Jan 22 09:32:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:50.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:50 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:51.460+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:51 np0005592159 nova_compute[226433]: 2026-01-22 14:32:51.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:51 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:52.506+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:32:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:32:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:52.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:52 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 3357 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:52 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:53.541+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:54.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:54.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:54.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:32:55 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.203 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.204 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.205 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:32:55 np0005592159 nova_compute[226433]: 2026-01-22 14:32:55.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:32:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:55.549+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:56 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:56 np0005592159 nova_compute[226433]: 2026-01-22 14:32:56.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:56.549+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:56.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:56.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #112. Immutable memtables: 0.
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.483246) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 112
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377483281, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1261, "num_deletes": 252, "total_data_size": 2149753, "memory_usage": 2179112, "flush_reason": "Manual Compaction"}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #113: started
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377490054, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 113, "file_size": 920251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56461, "largest_seqno": 57717, "table_properties": {"data_size": 916035, "index_size": 1612, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13120, "raw_average_key_size": 21, "raw_value_size": 906101, "raw_average_value_size": 1487, "num_data_blocks": 70, "num_entries": 609, "num_filter_entries": 609, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092303, "oldest_key_time": 1769092303, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 6834 microseconds, and 3091 cpu microseconds.
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.490085) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #113: 920251 bytes OK
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.490100) [db/memtable_list.cc:519] [default] Level-0 commit table #113 started
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492394) [db/memtable_list.cc:722] [default] Level-0 commit table #113: memtable #1 done
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492406) EVENT_LOG_v1 {"time_micros": 1769092377492403, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492423) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 2143590, prev total WAL file size 2143590, number of live WAL files 2.
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000109.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.493087) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373538' seq:0, type:0; will stop at (end)
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [113(898KB)], [111(11MB)]
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377493118, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [113], "files_L6": [111], "score": -1, "input_data_size": 12472413, "oldest_snapshot_seqno": -1}
Jan 22 09:32:57 np0005592159 nova_compute[226433]: 2026-01-22 14:32:57.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #114: 10089 keys, 9037738 bytes, temperature: kUnknown
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377544652, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 114, "file_size": 9037738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8982706, "index_size": 28680, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 272986, "raw_average_key_size": 27, "raw_value_size": 8811056, "raw_average_value_size": 873, "num_data_blocks": 1070, "num_entries": 10089, "num_filter_entries": 10089, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092377, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 114, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.544877) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 9037738 bytes
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.545998) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 241.7 rd, 175.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 11.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(23.4) write-amplify(9.8) OK, records in: 10575, records dropped: 486 output_compression: NoCompression
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.546025) EVENT_LOG_v1 {"time_micros": 1769092377546011, "job": 70, "event": "compaction_finished", "compaction_time_micros": 51610, "compaction_time_cpu_micros": 23250, "output_level": 6, "num_output_files": 1, "total_output_size": 9037738, "num_input_records": 10575, "num_output_records": 10089, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377546415, "job": 70, "event": "table_file_deletion", "file_number": 113}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000111.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092377549223, "job": 70, "event": "table_file_deletion", "file_number": 111}
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.492990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549379) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:32:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:32:57.549390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
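Note: the rocksdb lines above are the monitor's store at /var/lib/ceph/mon/ceph-compute-2/store.db flushing a memtable and running a manual compaction, which the mon does routinely as it trims old maps; the job 70 summary shows the L0 and L6 inputs being rewritten into a single ~9 MB L6 file. The same compaction can be requested on demand; a hedged one-liner wrapped in Python for consistency with the other sketches, with the mon id taken from the log:

    # Hedged sketch: ask the monitor from the log to compact its RocksDB store,
    # the on-demand version of the manual compaction recorded above.
    import subprocess

    subprocess.run(["ceph", "tell", "mon.compute-2", "compact"], check=False)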
Jan 22 09:32:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:57.552+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:58 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:32:58 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:58.584+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:32:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:32:58.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:32:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:32:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:32:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:32:58.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:32:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:32:59 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.517 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: e0e74330-96df-479f-8baf-53fbd2ccba91] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: f591d61b-712e-49aa-85bd-8d222b607eb3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 87e798e6-6f00-4fe1-8412-75ddc9e2878e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8331b067-1b3f-4a1d-a596-e966f6de776a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.546 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: a0b3924b-4422-47c5-ba40-748e41b14d00] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.547 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 001ba9a6-ba0c-438d-8150-5cfbcec3d34f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Jan 22 09:32:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:32:59.560+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:59 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:32:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.772 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.774 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.775 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:32:59 np0005592159 nova_compute[226433]: 2026-01-22 14:32:59.776 226437 DEBUG nova.objects.instance [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lazy-loading 'info_cache' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.012 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.203 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.206 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:00 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.221 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.222 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.223 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:00 np0005592159 podman[251478]: 2026-01-22 14:33:00.247641515 +0000 UTC m=+0.092571963 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:00 np0005592159 nova_compute[226433]: 2026-01-22 14:33:00.517 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:00.604+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:00 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:00.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:00.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:01 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:01 np0005592159 nova_compute[226433]: 2026-01-22 14:33:01.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:01 np0005592159 nova_compute[226433]: 2026-01-22 14:33:01.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:33:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:01.576+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:01 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:01 np0005592159 nova_compute[226433]: 2026-01-22 14:33:01.764 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 22 09:33:02 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:02.573+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:02 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:02.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:02.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:03 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:03 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.543 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.543 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.544 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:33:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:03.614+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:03 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.746 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.747 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.747 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.748 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.748 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.751 226437 INFO nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Terminating instance#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.752 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.753 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquired lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:33:03 np0005592159 nova_compute[226433]: 2026-01-22 14:33:03.753 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:33:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:33:03 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/433948236' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.011 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.093 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.093 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.096 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.096 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.159 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:33:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.248 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4142MB free_disk=20.68789291381836GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.249 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.322 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Applying migration context for instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 as it has an incoming, in-progress migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0. Migration status is migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.323 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.353 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.353 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8e98e700-52a4-44ff-8e11-9404cd11d871 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.354 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 9 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.355 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1728MB phys_disk=20GB used_disk=9GB total_vcpus=8 used_vcpus=9 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:33:04 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.508 226437 DEBUG nova.network.neutron [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.530 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Releasing lock "refresh_cache-8e98e700-52a4-44ff-8e11-9404cd11d871" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.531 226437 DEBUG nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Jan 22 09:33:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:04.566+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:04 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:04 np0005592159 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 22 09:33:04 np0005592159 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d0000000d.scope: Consumed 41.112s CPU time.
Jan 22 09:33:04 np0005592159 systemd-machined[194970]: Machine qemu-3-instance-0000000d terminated.
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.689 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.756 226437 INFO nova.virt.libvirt.driver [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance destroyed successfully.#033[00m
Jan 22 09:33:04 np0005592159 nova_compute[226433]: 2026-01-22 14:33:04.756 226437 DEBUG nova.objects.instance [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lazy-loading 'resources' on Instance uuid 8e98e700-52a4-44ff-8e11-9404cd11d871 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:33:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:04.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:04.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:33:05 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/33695955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.136 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.144 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.170 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.192 226437 INFO nova.virt.libvirt.driver [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deleting instance files /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871_del#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.192 226437 INFO nova.virt.libvirt.driver [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deletion of /var/lib/nova/instances/8e98e700-52a4-44ff-8e11-9404cd11d871_del complete#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.196 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.196 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.208 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.245 226437 INFO nova.compute.manager [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 0.71 seconds to destroy the instance on the hypervisor.#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.246 226437 DEBUG oslo.service.loopingcall [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.247 226437 DEBUG nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.247 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Jan 22 09:33:05 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.404 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.418 226437 DEBUG nova.network.neutron [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.434 226437 INFO nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Took 0.19 seconds to deallocate network for instance.#033[00m
Jan 22 09:33:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:05.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:05 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.553 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.553 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:05 np0005592159 nova_compute[226433]: 2026-01-22 14:33:05.786 226437 DEBUG oslo_concurrency.processutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:33:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:33:06 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1940242111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.203 226437 DEBUG oslo_concurrency.processutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.210 226437 DEBUG nova.compute.provider_tree [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:33:06 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:06.531+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:06 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.550 226437 DEBUG nova.scheduler.client.report [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.581 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.028s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.624 226437 INFO nova.scheduler.client.report [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Deleted allocations for instance 8e98e700-52a4-44ff-8e11-9404cd11d871#033[00m
Jan 22 09:33:06 np0005592159 nova_compute[226433]: 2026-01-22 14:33:06.705 226437 DEBUG oslo_concurrency.lockutils [None req-eb328021-1d68-409c-abd8-775a4ce8fcb4 a5be1e8103e142238ae4c912393095c4 688eff2d52114848b8ae16c9cfaa49d9 - - default default] Lock "8e98e700-52a4-44ff-8e11-9404cd11d871" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:06.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:06.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:07 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:07.580+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:07 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:08 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:08 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:08.551+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:08 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:08.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:33:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:08.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:33:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:09 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.438 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:33:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.439 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:33:09 np0005592159 nova_compute[226433]: 2026-01-22 14:33:09.440 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:09.440 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:33:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:09.598+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:09 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:10 np0005592159 nova_compute[226433]: 2026-01-22 14:33:10.209 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:10 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:10.558+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:10 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:10.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:10.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:11 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:11.530+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:11 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592159 podman[251593]: 2026-01-22 14:33:12.072339107 +0000 UTC m=+0.126390900 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:33:12 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:12.568+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:12 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:12 np0005592159 nova_compute[226433]: 2026-01-22 14:33:12.844 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 32 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 22 09:33:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:12.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:12.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:13 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:13 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:13.611+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:13 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:14.565+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:14 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:14 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:14.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:14.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.212 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.214 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.219 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:15 np0005592159 nova_compute[226433]: 2026-01-22 14:33:15.219 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:15.569+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:15 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:15 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:16.546+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:16 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 09:33:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:16.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:16.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:17.540+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:17 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:17 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:17 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 3388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:18.535+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:18 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:18 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:18.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:19.523+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:19 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:19 np0005592159 nova_compute[226433]: 2026-01-22 14:33:19.754 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092384.7511826, 8e98e700-52a4-44ff-8e11-9404cd11d871 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:33:19 np0005592159 nova_compute[226433]: 2026-01-22 14:33:19.754 226437 INFO nova.compute.manager [-] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:33:19 np0005592159 nova_compute[226433]: 2026-01-22 14:33:19.795 226437 DEBUG nova.compute.manager [None req-70e5a390-06c0-4aeb-b707-d4a109a305fd - - - - - -] [instance: 8e98e700-52a4-44ff-8e11-9404cd11d871] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:33:19 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592159 nova_compute[226433]: 2026-01-22 14:33:20.220 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:20 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:20.478+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:20 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:20.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:20.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:21.463+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:21 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:21 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:22.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:22 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:22.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:22.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:23.543+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:23 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:23 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:23 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:23 np0005592159 nova_compute[226433]: 2026-01-22 14:33:23.899 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 43 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 22 09:33:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:24.517+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:24 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:24 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:24.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:24.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:25 np0005592159 nova_compute[226433]: 2026-01-22 14:33:25.221 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:25 np0005592159 nova_compute[226433]: 2026-01-22 14:33:25.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:25.492+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:25 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:25 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:26.481+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:26 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:26.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:26.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:27.500+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:27 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:27 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:28.537+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:28 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:28.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:29.582+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:29 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:29 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:30 np0005592159 nova_compute[226433]: 2026-01-22 14:33:30.223 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:30.607+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:30 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:30.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:30 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:30 np0005592159 podman[251679]: 2026-01-22 14:33:30.999126411 +0000 UTC m=+0.057049199 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:33:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:31.570+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:31 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:31 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:32.608+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:32 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:32.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:32.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:32 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:32 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:33.603+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:33 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:34.644+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:34 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:34.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:34 np0005592159 nova_compute[226433]: 2026-01-22 14:33:34.949 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance in state 1 after 54 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Jan 22 09:33:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:34.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:35 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.224 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.226 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:35 np0005592159 nova_compute[226433]: 2026-01-22 14:33:35.227 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:35.644+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:35 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:36.693+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:36 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:36.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:36.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:37 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:37.726+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:37 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:38 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:38 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:38.728+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:38 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:38.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:38.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:39 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:39.696+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:39 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592159 nova_compute[226433]: 2026-01-22 14:33:40.228 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:40.717+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:40 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:40.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:40.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:40 np0005592159 nova_compute[226433]: 2026-01-22 14:33:40.978 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance failed to shutdown in 60 seconds.#033[00m
Jan 22 09:33:41 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:41.682+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:41 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:42.716+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:42 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:42.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:42.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:43 np0005592159 podman[251756]: 2026-01-22 14:33:43.058134492 +0000 UTC m=+0.116161619 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 09:33:43 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:43 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:43.743+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:43 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:44 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:44.763+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:44 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:44.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:44.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:45 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:45 np0005592159 nova_compute[226433]: 2026-01-22 14:33:45.231 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:45.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:45 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:46.769+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:46 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:46.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.208 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:33:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:47 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:47.725+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:47 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:48 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:48 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:48.710+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:48 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:48.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:48.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:49 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:49.739+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:49 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:50 np0005592159 nova_compute[226433]: 2026-01-22 14:33:50.233 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:50 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:50.731+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:50 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:50.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:50.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:51 np0005592159 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 22 09:33:51 np0005592159 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000015.scope: Consumed 16.518s CPU time.
Jan 22 09:33:51 np0005592159 systemd-machined[194970]: Machine qemu-5-instance-00000015 terminated.
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.222 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance destroyed successfully.#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.228 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.229 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:33:51 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.368 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.369 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.370 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.655 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.656 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.656 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:33:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:51.778+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:51 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:51 np0005592159 nova_compute[226433]: 2026-01-22 14:33:51.935 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:33:52 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.294 226437 DEBUG nova.network.neutron [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.318 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.437 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting finish_migration finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11698#033[00m
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.439 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719#033[00m
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.440 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Creating image(s)#033[00m
Jan 22 09:33:52 np0005592159 nova_compute[226433]: 2026-01-22 14:33:52.493 226437 DEBUG nova.storage.rbd_utils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] creating snapshot(nova-resize) on rbd image(33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
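Note: at 14:33:52 the resize path snapshots the instance's root disk before modifying it ("creating snapshot(nova-resize) on rbd image(33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk)"). A minimal sketch of the equivalent operation through the python-rbd bindings, assuming the 'vms' pool and the 'openstack' cephx user that appear elsewhere in this log; it illustrates the RBD call only, not nova's actual rbd_utils implementation.

    import rados
    import rbd

    # Connect as the same cephx user nova uses on this host (assumption from the log).
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            image = rbd.Image(ioctx, "33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk")
            try:
                image.create_snap("nova-resize")  # same snapshot name logged above
            finally:
                image.close()
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()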
Jan 22 09:33:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:52.730+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:52 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:52.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:33:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:52.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:33:53 np0005592159 nova_compute[226433]: 2026-01-22 14:33:53.197 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:53 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:53 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:53.766+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:53 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:54 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:54.806+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:54 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000016s ======
Jan 22 09:33:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.273 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5039 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117#033[00m
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Jan 22 09:33:55 np0005592159 nova_compute[226433]: 2026-01-22 14:33:55.275 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:33:55 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:55.842+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:55 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:56.845+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:56 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:56.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:56.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:57.861+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:57 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:57 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:57 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:33:58 np0005592159 nova_compute[226433]: 2026-01-22 14:33:58.511 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:58.894+0000 7f47f8ed4640 -1 osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:58 np0005592159 ceph-osd[79779]: osd.2 150 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:33:58.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 e151: 3 total, 3 up, 3 in
Jan 22 09:33:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:33:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:33:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:33:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:33:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:33:59 np0005592159 nova_compute[226433]: 2026-01-22 14:33:59.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:59 np0005592159 nova_compute[226433]: 2026-01-22 14:33:59.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:33:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:33:59.924+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:59 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:33:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:33:59 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.277 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.516 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.548 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.549 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.549 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Jan 22 09:34:00 np0005592159 nova_compute[226433]: 2026-01-22 14:34:00.786 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:34:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:00.923+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:00 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:00.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:00 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:01 np0005592159 nova_compute[226433]: 2026-01-22 14:34:01.576 226437 DEBUG nova.network.neutron [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:01 np0005592159 nova_compute[226433]: 2026-01-22 14:34:01.599 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:01 np0005592159 nova_compute[226433]: 2026-01-22 14:34:01.600 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Jan 22 09:34:01 np0005592159 nova_compute[226433]: 2026-01-22 14:34:01.601 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:01 np0005592159 nova_compute[226433]: 2026-01-22 14:34:01.601 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:01.968+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:01 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:02 np0005592159 podman[252008]: 2026-01-22 14:34:02.048760569 +0000 UTC m=+0.093863646 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:34:02 np0005592159 nova_compute[226433]: 2026-01-22 14:34:02.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:02 np0005592159 nova_compute[226433]: 2026-01-22 14:34:02.516 226437 DEBUG nova.compute.manager [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Jan 22 09:34:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:02.925+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:02 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:34:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:02.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:34:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:03.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:03 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:03 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:03 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:03.955+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:03 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.515 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.542 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.543 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Jan 22 09:34:04 np0005592159 nova_compute[226433]: 2026-01-22 14:34:04.543 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:04.929+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:04 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:04.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:34:04 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2051152007' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.010 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
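Note: the resource-audit pass shells out to "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" (0.467 s here) to size the RBD-backed storage before reporting free_disk. A sketch of reading the same output; the field names follow the ceph df JSON layout of recent releases and may differ on other versions.

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    df = json.loads(out)

    stats = df["stats"]
    print(f"cluster free: {stats['total_avail_bytes'] / 2**30:.1f} GiB "
          f"of {stats['total_bytes'] / 2**30:.1f} GiB")

    for pool in df.get("pools", []):
        if pool["name"] == "vms":
            print("vms pool max_avail (GiB):",
                  pool["stats"]["max_avail"] / 2**30)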
Jan 22 09:34:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.090 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.091 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.094 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.094 226437 DEBUG nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Jan 22 09:34:05 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.262 226437 WARNING nova.virt.libvirt.driver [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.263 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4568MB free_disk=20.733367919921875GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
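Note: the "Hypervisor/Node resource view" line embeds the node's PCI inventory as JSON (eleven functions: five Intel chipset IDs and six virtio devices). A tiny sketch of tallying such a list by vendor id; the two entries below are copied from that line and the rest is illustrative.

    import collections

    pci_devices = [
        {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0",
         "product_id": "7000", "vendor_id": "8086", "numa_node": None,
         "label": "label_8086_7000", "dev_type": "type-PCI"},
        {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0",
         "product_id": "1000", "vendor_id": "1af4", "numa_node": None,
         "label": "label_1af4_1000", "dev_type": "type-PCI"},
    ]

    counts = collections.Counter(d["vendor_id"] for d in pci_devices)
    print(dict(counts))  # on the full list from the log line: {'8086': 5, '1af4': 6}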
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.264 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.264 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.278 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.356 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Applying migration context for instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 as it has an incoming, in-progress migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0. Migration status is post-migrating _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:950#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.357 226437 INFO nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating resource usage from migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 001ba9a6-ba0c-438d-8150-5cfbcec3d34f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance f591d61b-712e-49aa-85bd-8d222b607eb3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance a0b3924b-4422-47c5-ba40-748e41b14d00 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.402 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance e0e74330-96df-479f-8baf-53fbd2ccba91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 8331b067-1b3f-4a1d-a596-e966f6de776a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 is active on this compute host and has allocations in placement: {'resources': {'VCPU': 1, 'MEMORY_MB': 128, 'DISK_GB': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1640#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Instance 87e798e6-6f00-4fe1-8412-75ddc9e2878e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Total usable vcpus: 8, total allocated vcpus: 8 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.403 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7679MB used_ram=1600MB phys_disk=20GB used_disk=8GB total_vcpus=8 used_vcpus=8 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Jan 22 09:34:05 np0005592159 nova_compute[226433]: 2026-01-22 14:34:05.679 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:05.974+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:05 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:34:06 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2576899971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.074 226437 DEBUG oslo_concurrency.processutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
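The ceph df --format=json call above, issued with the client.openstack keyring against /etc/ceph/ceph.conf, is how the RBD-backed disk capacity reported to placement is derived. Below is a minimal sketch of running the same query and pulling the totals out of the JSON; the exact field names (stats, total_bytes, total_avail_bytes, max_avail) can vary between Ceph releases, so treat them as assumptions.

# Hypothetical sketch: run the same "ceph df" query nova issues and parse the totals.
# Assumes the client.openstack keyring and /etc/ceph/ceph.conf exist on this host;
# JSON field names may differ between Ceph releases.
import json
import subprocess

def ceph_df(client_id="openstack", conf="/etc/ceph/ceph.conf"):
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    df = ceph_df()
    stats = df.get("stats", {})
    total_gb = stats.get("total_bytes", 0) / 1024 ** 3
    avail_gb = stats.get("total_avail_bytes", 0) / 1024 ** 3
    print(f"cluster total: {total_gb:.1f} GiB, available: {avail_gb:.1f} GiB")
    for pool in df.get("pools", []):
        print(pool.get("name"), pool.get("stats", {}).get("max_avail"))
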
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.080 226437 DEBUG nova.compute.provider_tree [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.100 226437 DEBUG nova.scheduler.client.report [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.126 226437 DEBUG nova.compute.resource_tracker [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.126 226437 DEBUG oslo_concurrency.lockutils [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
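The "Acquiring lock" / "acquired :: waited" / "released :: held" triplets around compute_resources come from oslo.concurrency's synchronized wrapper (the inner function logged from lockutils.py). A minimal sketch of the same pattern follows, assuming oslo.concurrency is installed; the decorated function is a placeholder, not nova's resource tracker. The plain context-manager form shown after it is what produces the "Acquiring lock / Acquired lock / Releasing lock" lines seen for the refresh_cache-* locks further down.

# Minimal sketch of the oslo.concurrency locking pattern seen in these log lines.
# Requires oslo.concurrency; update_available_resource() is a placeholder body.
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Runs with the "compute_resources" lock held; the decorator emits the
    # "acquired ... :: waited" / "released ... :: held" debug lines.
    pass

update_available_resource()

# Equivalent ad-hoc critical section (logs "Acquiring lock"/"Acquired lock"/"Releasing lock"):
with lockutils.lock("some_other_resource"):
    pass  # critical section
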
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.219 226437 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769092431.2180736, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.219 226437 INFO nova.compute.manager [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Stopped (Lifecycle Event)#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.243 226437 DEBUG nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.247 226437 DEBUG nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:34:06 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:06 np0005592159 nova_compute[226433]: 2026-01-22 14:34:06.279 226437 INFO nova.compute.manager [None req-49dcdea9-b2d6-4f33-b7a2-4960e03f3053 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
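The power-state mismatch logged above ("current DB power_state: 1, VM power_state: 4") is easier to read with the numeric constants spelled out. The mapping below is indicative of the nova.compute.power_state constants and is included only to decode the log values: the database still records RUNNING while libvirt reports SHUTDOWN, which is expected while the resize_finish task is in flight, so the sync is skipped.

# Indicative mapping of nova.compute.power_state constants, used here only to
# decode "current DB power_state: 1, VM power_state: 4" in the line above.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

print(POWER_STATES[1], "->", POWER_STATES[4])  # RUNNING -> SHUTDOWN
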
Jan 22 09:34:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:06.975+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:06 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:06.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:07.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.056 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.057 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.088 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.175 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.175 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.182 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.182 226437 INFO nova.compute.claims [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:34:07 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.433 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:34:07 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2163323487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.842 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.848 226437 DEBUG nova.compute.provider_tree [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.869 226437 DEBUG nova.scheduler.client.report [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.893 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.894 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:34:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:07.941+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:07 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.943 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.943 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.976 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Ignoring supplied device name: /dev/sda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:34:07 np0005592159 nova_compute[226433]: 2026-01-22 14:34:07.993 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.112 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.113 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.113 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating image(s)#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.139 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.165 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.194 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.197 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.197 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:08 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:08 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3438 sec, osd.2 has slow ops (SLOW_OPS)
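The SLOW_OPS health check above (38 slow ops, oldest blocked for 3438 sec on osd.2) persists across this whole window, so the RBD operations that follow are running against a degraded pool. Below is a hedged sketch of polling the same check from a script; it assumes credentials with health read access, and the JSON layout of ceph health detail may differ between Ceph releases.

# Sketch: poll "ceph health detail" and surface the SLOW_OPS check seen in the log.
# Assumes a reachable cluster and suitable credentials; the JSON structure is an
# assumption and may vary with the Ceph release.
import json
import subprocess

def slow_ops_summary(conf="/etc/ceph/ceph.conf"):
    out = subprocess.run(
        ["ceph", "health", "detail", "--format=json", "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    check = health.get("checks", {}).get("SLOW_OPS")
    if not check:
        return "no SLOW_OPS check active"
    return check.get("summary", {}).get("message", "SLOW_OPS present")

if __name__ == "__main__":
    print(slow_ops_summary())
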
Jan 22 09:34:08 np0005592159 nova_compute[226433]: 2026-01-22 14:34:08.522 226437 DEBUG nova.policy [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'dffdbec5046d4aaf94146923e1681ea1', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f3ac78c8a3fa42b39e64829385672445', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:34:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:08.906+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:08 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:08.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:09.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:09 np0005592159 nova_compute[226433]: 2026-01-22 14:34:09.035 226437 DEBUG nova.virt.libvirt.imagebackend [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image locations are: [{'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/a2fdc415-533a-451d-9678-120e6e30afc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://088fe176-0106-5401-803c-2da38b73b76a/images/a2fdc415-533a-451d-9678-120e6e30afc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Jan 22 09:34:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:09 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:09.876+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:09 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:09.879 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:34:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:09.880 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:34:09 np0005592159 nova_compute[226433]: 2026-01-22 14:34:09.891 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.004 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Successfully created port: e581f563-3369-4b65-92c8-89785e787b51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
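The "Successfully created port" line above is nova asking neutron to allocate the instance's port in the background. For comparison, here is a client-side sketch of the same request via openstacksdk; the cloud name is a placeholder, and the network ID is the tenant network that appears later in this instance's network_info.

# Sketch: create a neutron port from the client side with openstacksdk.
# Assumes a clouds.yaml entry named "mycloud" (placeholder); the network ID is
# the tempest tenant network that shows up in this instance's network_info below.
import openstack

conn = openstack.connect(cloud="mycloud")
port = conn.network.create_port(
    network_id="e70febd3-9995-42cd-a322-30bf5db3445d",
)
print(port.id, port.fixed_ips)
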
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.267 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.355 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.357 226437 DEBUG nova.virt.images [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] a2fdc415-533a-451d-9678-120e6e30afc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.359 226437 DEBUG nova.privsep.utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.359 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.540 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.part /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.544 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.595 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74.converted --force-share --output=json" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.597 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "e47f52dd8ba9b9798349c19f2b626bd4b933ad74" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
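The image-cache block above shows the fetch path: qemu-img info on the downloaded .part file (wrapped in oslo_concurrency.prlimit to cap address space at 1 GiB and CPU time at 30 s), a qemu-img convert -t none -O raw -f qcow2 conversion, and a second info call to verify the result before the base-image lock is released. A sketch of the same sequence with plain subprocess calls follows; the paths are placeholders and the prlimit wrapper is omitted.

# Sketch of the fetch-and-convert sequence from the log: inspect the downloaded
# qcow2, convert it to raw with host caching disabled (-t none), then re-inspect.
# Paths are placeholders; nova additionally runs "qemu-img info" under
# oslo_concurrency.prlimit to bound memory and CPU time.
import json
import subprocess

SRC = "/var/lib/nova/instances/_base/example.part"       # placeholder
DST = "/var/lib/nova/instances/_base/example.converted"  # placeholder

def img_info(path):
    out = subprocess.run(
        ["qemu-img", "info", path, "--force-share", "--output=json"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if img_info(SRC)["format"] == "qcow2":
    subprocess.run(
        ["qemu-img", "convert", "-t", "none", "-O", "raw", "-f", "qcow2", SRC, DST],
        check=True,
    )
    assert img_info(DST)["format"] == "raw"
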
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.626 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:10 np0005592159 nova_compute[226433]: 2026-01-22 14:34:10.630 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:10 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:10 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:10.832+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:10.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:11.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.040 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Successfully updated port: e581f563-3369-4b65-92c8-89785e787b51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.050 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.082 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.083 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.083 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.131 226437 DEBUG nova.compute.manager [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-changed-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.132 226437 DEBUG nova.compute.manager [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing instance network info cache due to event network-changed-e581f563-3369-4b65-92c8-89785e787b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.132 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.139 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] resizing rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
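With the RBD image backend, the converted base file is pushed into the vms pool with rbd import and the resulting image is then grown to the flavor's root disk size (1073741824 bytes, i.e. 1 GiB). Below is a sketch of the two steps via the rbd CLI; note that nova performs the resize through the rbd Python binding rather than the CLI, and the keyring/conf paths are assumed to exist on the host.

# Sketch of the two steps above: import the raw base file into the "vms" pool,
# then resize the resulting RBD image to the flavor root size (1 GiB here).
# Assumes the client.openstack keyring and /etc/ceph/ceph.conf are present.
import subprocess

BASE = "/var/lib/nova/instances/_base/e47f52dd8ba9b9798349c19f2b626bd4b933ad74"
IMAGE = "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk"

subprocess.run(
    ["rbd", "import", "--pool", "vms", BASE, IMAGE,
     "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
subprocess.run(
    ["rbd", "resize", "--pool", "vms", "--image", IMAGE, "--size", "1G",
     "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
    check=True,
)
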
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.239 226437 DEBUG nova.objects.instance [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lazy-loading 'migration_context' on Instance uuid 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.256 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.256 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Ensure instance console log exists: /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.257 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.258 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.258 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:11 np0005592159 nova_compute[226433]: 2026-01-22 14:34:11.341 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:34:11 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:11.793+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:11 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.111 226437 DEBUG nova.network.neutron [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.269 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.269 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance network_info: |[{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.270 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.270 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing network info cache for port e581f563-3369-4b65-92c8-89785e787b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.276 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Start _get_guest_xml network_info=[{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'scsi', 'cdrom_bus': 'scsi', 'mapping': {'root': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'scsi', 'dev': 'sda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'scsi', 'dev': 'sdb', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T14:33:57Z,direct_url=<?>,disk_format='qcow2',id=a2fdc415-533a-451d-9678-120e6e30afc5,min_disk=0,min_ram=0,name='',owner='fedf0aaa09a64f7ba34cf04c2e4f7c97',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T14:33:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/sda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'scsi', 'device_name': '/dev/sda', 'image_id': 'a2fdc415-533a-451d-9678-120e6e30afc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.282 226437 WARNING nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.318 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.319 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.323 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.324 226437 DEBUG nova.virt.libvirt.host [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.326 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.326 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T14:33:57Z,direct_url=<?>,disk_format='qcow2',id=a2fdc415-533a-451d-9678-120e6e30afc5,min_disk=0,min_ram=0,name='',owner='fedf0aaa09a64f7ba34cf04c2e4f7c97',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T14:33:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.327 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.328 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.328 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.329 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.329 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.330 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.330 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.331 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.331 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.332 226437 DEBUG nova.virt.hardware [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
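Annotation: the hardware.py debug lines above show the CPU topology selection for this guest. With no flavor or image preference (0:0:0) and the default 65536 limits, the only factorization of 1 vCPU is sockets=1, cores=1, threads=1, which is what later appears in the generated domain XML. The following is a rough standalone sketch of that enumeration only, not nova's actual _get_possible_cpu_topologies implementation.

    # Illustrative sketch: enumerate sockets*cores*threads factorizations of the
    # vCPU count within the logged limits. Not nova's code; see
    # nova/virt/hardware.py for the real logic.
    from collections import namedtuple

    VirtCPUTopology = namedtuple("VirtCPUTopology", "sockets cores threads")

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    # For the 1-vCPU m1.nano flavor above this yields a single candidate,
    # matching "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(list(possible_topologies(1)))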
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.337 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:34:12 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3564186839' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.803 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
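Annotation: nova's RBD storage backend shells out to the ceph CLI here through oslo_concurrency.processutils to fetch the monitor map. A minimal sketch of reproducing the same call, assuming /etc/ceph/ceph.conf and the client.openstack keyring are readable by the caller:

    # Re-run the monitor dump exactly as logged above and list the monitors.
    import json
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'ceph', 'mon', 'dump', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    mon_map = json.loads(out)
    print([m['name'] for m in mon_map['mons']])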
Jan 22 09:34:12 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:12.810+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:12 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.828 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:12 np0005592159 nova_compute[226433]: 2026-01-22 14:34:12.833 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:12 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:12.882 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000015s ======
Jan 22 09:34:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:12.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Jan 22 09:34:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:13.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:34:13 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1951221719' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.265 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.267 226437 DEBUG nova.virt.libvirt.vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:34:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-673350482',display_name='tempest-AttachSCSIVolumeTestJSON-server-673350482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-673350482',id=22,image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPD0y3mq9CfOHokaR31LEO/NdlTki7hmL1Lmoupuqg1kWxHy0vOWCB8Qr7HBmO03ylnoCixzCBjeQqzIRrpgVE512GDKdI5XzcntJi8Mu2wzHF18nKGhhZcU5kWNmNOuYA==',key_name='tempest-keypair-2020706736',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3ac78c8a3fa42b39e64829385672445',ramdisk_id='',reservation_id='r-9hbea8q0',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-952968705',owner_user_name='tempest-AttachSCSIVolumeTestJSON-952968705-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:34:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dffdbec5046d4aaf94146923e1681ea1',uuid=839e8e64-64a9-4e35-85dd-cdbb7f8e71c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.268 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converting VIF {"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.269 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.270 226437 DEBUG nova.objects.instance [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lazy-loading 'pci_devices' on Instance uuid 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.301 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <uuid>839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</uuid>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <name>instance-00000016</name>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <memory>131072</memory>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:name>tempest-AttachSCSIVolumeTestJSON-server-673350482</nova:name>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:34:12</nova:creationTime>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.nano">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:memory>128</nova:memory>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:user uuid="dffdbec5046d4aaf94146923e1681ea1">tempest-AttachSCSIVolumeTestJSON-952968705-project-member</nova:user>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:project uuid="f3ac78c8a3fa42b39e64829385672445">tempest-AttachSCSIVolumeTestJSON-952968705</nova:project>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="a2fdc415-533a-451d-9678-120e6e30afc5"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <nova:ports>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <nova:port uuid="e581f563-3369-4b65-92c8-89785e787b51">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        </nova:port>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </nova:ports>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="serial">839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="uuid">839e8e64-64a9-4e35-85dd-cdbb7f8e71c5</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <target dev="sda" bus="scsi"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <address type="drive" controller="0" unit="0"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <target dev="sdb" bus="scsi"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <address type="drive" controller="0" unit="1"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="scsi" index="0" model="virtio-scsi"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <interface type="ethernet">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <mac address="fa:16:3e:35:f2:b5"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <driver name="vhost" rx_queue_size="512"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <mtu size="1442"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <target dev="tape581f563-33"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </interface>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/console.log" append="off"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:34:13 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:34:13 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:34:13 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:34:13 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
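Annotation: the domain XML above is what nova hands to libvirt to start instance-00000016. A minimal sketch of launching such an XML directly with the libvirt-python bindings; this is not nova's code path (nova wraps it in nova.virt.libvirt.guest.Guest), and 'domain.xml' is a hypothetical local copy of the XML logged above.

    # Start a transient domain from a saved copy of the XML (sketch only).
    import libvirt

    with open('domain.xml') as f:
        xml = f.read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.createXML(xml, 0)   # define and boot a transient guest
        print(dom.name(), dom.UUIDString())
    finally:
        conn.close()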
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.303 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Preparing to wait for external event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.303 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.304 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.304 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.305 226437 DEBUG nova.virt.libvirt.vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-22T14:34:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachSCSIVolumeTestJSON-server-673350482',display_name='tempest-AttachSCSIVolumeTestJSON-server-673350482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-attachscsivolumetestjson-server-673350482',id=22,image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPD0y3mq9CfOHokaR31LEO/NdlTki7hmL1Lmoupuqg1kWxHy0vOWCB8Qr7HBmO03ylnoCixzCBjeQqzIRrpgVE512GDKdI5XzcntJi8Mu2wzHF18nKGhhZcU5kWNmNOuYA==',key_name='tempest-keypair-2020706736',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f3ac78c8a3fa42b39e64829385672445',ramdisk_id='',reservation_id='r-9hbea8q0',resources=None,root_device_name='/dev/sda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='a2fdc415-533a-451d-9678-120e6e30afc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='scsi',image_hw_disk_bus='scsi',image_hw_machine_type='q35',image_hw_scsi_model='virtio-scsi',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachSCSIVolumeTestJSON-952968705',owner_user_name='tempest-AttachSCSIVolumeTestJSON-952968705-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-22T14:34:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='dffdbec5046d4aaf94146923e1681ea1',uuid=839e8e64-64a9-4e35-85dd-cdbb7f8e71c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": 
"tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.306 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converting VIF {"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.306 226437 DEBUG nova.network.os_vif_util [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.307 226437 DEBUG os_vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.308 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.308 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.309 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.313 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.314 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape581f563-33, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.314 226437 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape581f563-33, col_values=(('external_ids', {'iface-id': 'e581f563-3369-4b65-92c8-89785e787b51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:f2:b5', 'vm-uuid': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:13 np0005592159 NetworkManager[49000]: <info>  [1769092453.3173] manager: (tape581f563-33): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.316 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.319 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.325 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.326 226437 INFO os_vif [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:f2:b5,bridge_name='br-int',has_traffic_filtering=True,id=e581f563-3369-4b65-92c8-89785e787b51,network=Network(e70febd3-9995-42cd-a322-30bf5db3445d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape581f563-33')#033[00m
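Annotation: the AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are issued by os-vif through ovsdbapp against the local ovsdb-server. For illustration only, the same plug expressed with the ovs-vsctl CLI, with the port name, iface-id, MAC and instance UUID copied from the log lines above:

    # Equivalent ovs-vsctl form of the logged ovsdbapp transactions (sketch;
    # os-vif talks to ovsdb-server directly rather than via the CLI).
    import subprocess

    port = 'tape581f563-33'
    subprocess.run(['ovs-vsctl', '--may-exist', 'add-br', 'br-int', '--',
                    'set', 'Bridge', 'br-int', 'datapath_type=system'], check=True)
    subprocess.run([
        'ovs-vsctl', '--may-exist', 'add-port', 'br-int', port, '--',
        'set', 'Interface', port,
        'external_ids:iface-id=e581f563-3369-4b65-92c8-89785e787b51',
        'external_ids:iface-status=active',
        'external_ids:attached-mac="fa:16:3e:35:f2:b5"',
        'external_ids:vm-uuid=839e8e64-64a9-4e35-85dd-cdbb7f8e71c5',
    ], check=True)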
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.375 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.376 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No BDM found with device name sdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.376 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] No VIF found with MAC fa:16:3e:35:f2:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.377 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Using config drive#033[00m
Jan 22 09:34:13 np0005592159 nova_compute[226433]: 2026-01-22 14:34:13.400 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:13 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:13 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:13.843+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:13 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
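Annotation: osd.2 on this host keeps reporting the same 38 slow ops (oldest blocked for 3443 s per the health check update above), mostly against the vms pool this instance's disks live in. A sketch of two commands that could be used to inspect them, assuming the ceph CLI and an admin keyring are reachable from this host (for example inside cephadm shell):

    # Inspect the SLOW_OPS warning reported above (sketch only).
    import subprocess

    subprocess.run(['ceph', 'health', 'detail'], check=True)
    subprocess.run(['ceph', 'daemon', 'osd.2', 'dump_ops_in_flight'], check=True)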
Jan 22 09:34:14 np0005592159 podman[252410]: 2026-01-22 14:34:14.068242112 +0000 UTC m=+0.119230626 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:34:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:14 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:14.886+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:14 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:14.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:15.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.283 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.386 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Creating config drive at /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.395 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61amt8xc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.529 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp61amt8xc" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.556 226437 DEBUG nova.storage.rbd_utils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] rbd image 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.560 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.759 226437 DEBUG oslo_concurrency.processutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.760 226437 INFO nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Deleting local config drive /var/lib/nova/instances/839e8e64-64a9-4e35-85dd-cdbb7f8e71c5/disk.config because it was imported into RBD.#033[00m
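Annotation: the config drive is built locally with mkisofs and then imported into the vms pool as 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5_disk.config, matching the cdrom <source> in the domain XML above, after which the local ISO is deleted. A sketch re-creating the same two commands with the paths and flags copied from the log; /tmp/tmp61amt8xc was nova's temporary metadata directory and would need to be substituted with your own.

    # Rebuild and re-import the config drive as logged above (sketch only).
    import subprocess

    instance = '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5'
    iso = f'/var/lib/nova/instances/{instance}/disk.config'

    subprocess.run([
        '/usr/bin/mkisofs', '-o', iso,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp61amt8xc',
    ], check=True)

    subprocess.run([
        'rbd', 'import', '--pool', 'vms', iso, f'{instance}_disk.config',
        '--image-format=2', '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf',
    ], check=True)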
Jan 22 09:34:15 np0005592159 NetworkManager[49000]: <info>  [1769092455.8298] manager: (tape581f563-33): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 22 09:34:15 np0005592159 kernel: tape581f563-33: entered promiscuous mode
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.837 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:15 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:15Z|00057|binding|INFO|Claiming lport e581f563-3369-4b65-92c8-89785e787b51 for this chassis.
Jan 22 09:34:15 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:15Z|00058|binding|INFO|e581f563-3369-4b65-92c8-89785e787b51: Claiming fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.850 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:f2:b5 10.100.0.11'], port_security=['fa:16:3e:35:f2:b5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e70febd3-9995-42cd-a322-30bf5db3445d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3ac78c8a3fa42b39e64829385672445', 'neutron:revision_number': '2', 'neutron:security_group_ids': '28729834-6047-40c0-87ed-a5757ce1c57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8526bd5b-b1c9-4a14-b4ce-8f8562154268, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=e581f563-3369-4b65-92c8-89785e787b51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.853 143497 INFO neutron.agent.ovn.metadata.agent [-] Port e581f563-3369-4b65-92c8-89785e787b51 in datapath e70febd3-9995-42cd-a322-30bf5db3445d bound to our chassis#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.856 143497 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e70febd3-9995-42cd-a322-30bf5db3445d#033[00m
Jan 22 09:34:15 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:15 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:15.858+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:15 np0005592159 systemd-udevd[252494]: Network interface NamePolicy= disabled on kernel command line.
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.870 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0db13fcd-9350-496f-be04-86ddaccdcf45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.871 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape70febd3-91 in ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.874 237689 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape70febd3-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.874 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[71db2b83-41f4-4c9a-93ab-70b270062635]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.875 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[236964a7-fed6-4172-8e96-0950c34fb08a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 systemd-machined[194970]: New machine qemu-6-instance-00000016.
Jan 22 09:34:15 np0005592159 NetworkManager[49000]: <info>  [1769092455.8828] device (tape581f563-33): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 22 09:34:15 np0005592159 NetworkManager[49000]: <info>  [1769092455.8834] device (tape581f563-33): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.888 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[130d2c05-01a9-49e3-b8f6-6f68315c8ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 systemd[1]: Started Virtual Machine qemu-6-instance-00000016.
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.914 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7723447c-9103-4169-ade3-72c5877a6e91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.934 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.937 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[38046e87-465f-4e31-bce6-ca4351f74ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:15Z|00059|binding|INFO|Setting lport e581f563-3369-4b65-92c8-89785e787b51 ovn-installed in OVS
Jan 22 09:34:15 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:15Z|00060|binding|INFO|Setting lport e581f563-3369-4b65-92c8-89785e787b51 up in Southbound
Jan 22 09:34:15 np0005592159 nova_compute[226433]: 2026-01-22 14:34:15.941 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.944 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[5f470856-e040-47a2-8cb0-b0af7c7c574f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 NetworkManager[49000]: <info>  [1769092455.9458] manager: (tape70febd3-90): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.975 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb106f1-7089-465d-a4a6-aba7925f6da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:15 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:15.978 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[8f36051a-11ce-43cb-8682-648ddfd6f9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 NetworkManager[49000]: <info>  [1769092456.0049] device (tape70febd3-90): carrier: link connected
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.010 237788 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3c116a-ef33-46ad-9fdc-0afa18c29b75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.030 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[5d16bfe4-fc0a-424a-afc7-84d0c7fca592]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape70febd3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:0c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630671, 'reachable_time': 20364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252531, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.046 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[24b3259f-8be6-4eaf-91e3-f8e2c9f11cf6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefa:c26'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630671, 'tstamp': 630671}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252533, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.061 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd623e4-8c94-499b-991c-7b3683a64dbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape70febd3-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fa:0c:26'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630671, 'reachable_time': 20364, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252534, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.093 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[b0dc77e7-fb2e-4bfb-9f3e-a85126bf3376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.145 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[86da6caf-a609-4433-8ace-f555714bf187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.146 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape70febd3-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.147 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.147 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape70febd3-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.266 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:16 np0005592159 NetworkManager[49000]: <info>  [1769092456.2669] manager: (tape70febd3-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 22 09:34:16 np0005592159 kernel: tape70febd3-90: entered promiscuous mode
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.272 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.273 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape70febd3-90, col_values=(('external_ids', {'iface-id': '3c983055-ff9e-4976-9d9f-e2b4b8598736'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
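The three ovsdbapp transactions above (drop any stale tape70febd3-90 from br-ex, add it to br-int, stamp it with external_ids:iface-id) are what tie the host-side veth end to OVN: ovn-controller matches the iface-id against the logical port when binding it on this chassis. Expressed as plain ovs-vsctl calls, purely as a sketch (the names are copied from the log; the agent itself drives this through ovsdbapp, not the CLI):

    # Sketch: the DelPort/AddPort/DbSet transactions above as ovs-vsctl calls.
    import subprocess

    PORT = "tape70febd3-90"
    IFACE_ID = "3c983055-ff9e-4976-9d9f-e2b4b8598736"

    for cmd in (
        ["ovs-vsctl", "--if-exists", "del-port", "br-ex", PORT],
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", PORT],
        ["ovs-vsctl", "set", "Interface", PORT,
         f"external_ids:iface-id={IFACE_ID}"],
    ):
        subprocess.run(cmd, check=True)
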
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.274 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:16 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:16Z|00061|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.294 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.299 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.300 143497 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.301 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0906251e-fcc0-4a4a-964a-709789f6e945]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.302 143497 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: global
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    log         /dev/log local0 debug
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    log-tag     haproxy-metadata-proxy-e70febd3-9995-42cd-a322-30bf5db3445d
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    user        root
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    group       root
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    maxconn     1024
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    pidfile     /var/lib/neutron/external/pids/e70febd3-9995-42cd-a322-30bf5db3445d.pid.haproxy
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    daemon
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: defaults
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    log global
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    mode http
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    option httplog
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    option dontlognull
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    option http-server-close
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    option forwardfor
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    retries                 3
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    timeout http-request    30s
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    timeout connect         30s
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    timeout client          32s
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    timeout server          32s
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    timeout http-keep-alive 30s
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: listen listener
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    bind 169.254.169.254:80
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    server metadata /var/lib/neutron/metadata_proxy
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]:    http-request add-header X-OVN-Network-ID e70febd3-9995-42cd-a322-30bf5db3445d
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Jan 22 09:34:16 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:16.302 143497 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'env', 'PROCESS_TAG=haproxy-e70febd3-9995-42cd-a322-30bf5db3445d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e70febd3-9995-42cd-a322-30bf5db3445d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
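The rendered haproxy config above makes the per-network proxy listen on 169.254.169.254:80 inside the ovnmeta namespace, add an X-OVN-Network-ID header, and hand requests to the metadata agent over the /var/lib/neutron/metadata_proxy unix socket (the agent in turn talks to nova); the rootwrap command line then starts haproxy with that config inside the namespace. A quick, hedged check that the listener actually came up once the sidecar is running; any HTTP status back is enough to show something is bound on the metadata address in that namespace:

    # Sketch: probe the per-network metadata proxy from the host.
    import subprocess

    NS = "ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d"
    subprocess.run(
        ["ip", "netns", "exec", NS,
         "curl", "-sS", "-o", "/dev/null", "-w", "%{http_code}\n",
         "http://169.254.169.254/"],
        check=False,
    )
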
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.443 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.4430985, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.445 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Started (Lifecycle Event)#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.486 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.491 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.443212, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.491 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Paused (Lifecycle Event)#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.522 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.527 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.549 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
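The Paused -> Resumed churn during spawn is the normal libvirt launch sequence, and the numeric states in the sync messages are nova's power_state constants: "DB power_state: 0, VM power_state: 3" means the database still holds NOSTATE while libvirt already reports the guest as PAUSED (the later Resumed sync shows VM power_state: 1, RUNNING). For reference, the mapping as defined in nova.compute.power_state (the gaps at 0x02 and 0x05 are historical):

    # nova.compute.power_state values referenced by the sync messages above.
    POWER_STATE = {
        0x00: "NOSTATE",
        0x01: "RUNNING",
        0x03: "PAUSED",
        0x04: "SHUTDOWN",
        0x06: "CRASHED",
        0x07: "SUSPENDED",
    }
    # "DB power_state: 0, VM power_state: 3"  ->  NOSTATE vs PAUSED
    print(POWER_STATE[0], "vs", POWER_STATE[3])
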
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.563 226437 DEBUG nova.compute.manager [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.563 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG oslo_concurrency.lockutils [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.564 226437 DEBUG nova.compute.manager [req-900780ff-cd97-4469-9161-2b8a94435d5c req-15bec1a0-156b-4dcf-af7f-3c05057796bf 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Processing event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.565 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.570 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092456.5692425, 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.570 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.571 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.575 226437 INFO nova.virt.libvirt.driver [-] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Instance spawned successfully.#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.575 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Attempting to register defaults for the following image properties: ['hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.579 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.579 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.580 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.580 226437 DEBUG nova.virt.libvirt.driver [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.589 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.593 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.640 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.661 226437 INFO nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Took 8.55 seconds to spawn the instance on the hypervisor.#033[00m
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.662 226437 DEBUG nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:16 np0005592159 podman[252613]: 2026-01-22 14:34:16.670319826 +0000 UTC m=+0.058805116 container create 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 22 09:34:16 np0005592159 systemd[1]: Started libpod-conmon-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope.
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.727 226437 INFO nova.compute.manager [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Took 9.57 seconds to build instance.#033[00m
Jan 22 09:34:16 np0005592159 podman[252613]: 2026-01-22 14:34:16.636628088 +0000 UTC m=+0.025113458 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 22 09:34:16 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:34:16 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d345afaa304af39e2e2833fda5b6655c176308d120bb6c3c940577074f3c39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 22 09:34:16 np0005592159 nova_compute[226433]: 2026-01-22 14:34:16.757 226437 DEBUG oslo_concurrency.lockutils [None req-7a90d5da-8f3a-48aa-93b5-d3ccf7bc17ba dffdbec5046d4aaf94146923e1681ea1 f3ac78c8a3fa42b39e64829385672445 - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:16 np0005592159 podman[252613]: 2026-01-22 14:34:16.770376026 +0000 UTC m=+0.158861316 container init 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 22 09:34:16 np0005592159 podman[252613]: 2026-01-22 14:34:16.782612145 +0000 UTC m=+0.171097435 container start 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:34:16 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : New worker (252635) forked
Jan 22 09:34:16 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : Loading success.
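Each per-network proxy runs as its own podman sidecar named neutron-haproxy-ovnmeta-<network-uuid>, so the container create/init/start events and the haproxy "New worker forked / Loading success" notices above all belong to the same unit of work. A small sketch, assuming podman is on PATH and its JSON output format, for listing these sidecars and their state:

    # Sketch: list the neutron metadata-proxy sidecar containers on this host.
    import json
    import subprocess

    raw = subprocess.run(
        ["podman", "ps", "--filter", "name=neutron-haproxy-ovnmeta",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for ctr in json.loads(raw):
        print(ctr["Names"][0], ctr["State"], ctr["Image"])
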
Jan 22 09:34:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:16.852+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:16 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #115. Immutable memtables: 0.
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.888458) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 115
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456888540, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 1353, "num_deletes": 251, "total_data_size": 2459334, "memory_usage": 2501440, "flush_reason": "Manual Compaction"}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #116: started
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456903515, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 116, "file_size": 1594051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57722, "largest_seqno": 59070, "table_properties": {"data_size": 1588533, "index_size": 2722, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14237, "raw_average_key_size": 20, "raw_value_size": 1576433, "raw_average_value_size": 2314, "num_data_blocks": 118, "num_entries": 681, "num_filter_entries": 681, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092378, "oldest_key_time": 1769092378, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 15115 microseconds, and 8111 cpu microseconds.
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903582) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #116: 1594051 bytes OK
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.903606) [db/memtable_list.cc:519] [default] Level-0 commit table #116 started
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905150) [db/memtable_list.cc:722] [default] Level-0 commit table #116: memtable #1 done
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905176) EVENT_LOG_v1 {"time_micros": 1769092456905167, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.905202) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 2452768, prev total WAL file size 2452768, number of live WAL files 2.
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000112.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.906494) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [116(1556KB)], [114(8825KB)]
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456906538, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [116], "files_L6": [114], "score": -1, "input_data_size": 10631789, "oldest_snapshot_seqno": -1}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #117: 10249 keys, 8931494 bytes, temperature: kUnknown
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456968493, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 117, "file_size": 8931494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8875681, "index_size": 29077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25669, "raw_key_size": 277545, "raw_average_key_size": 27, "raw_value_size": 8701348, "raw_average_value_size": 848, "num_data_blocks": 1083, "num_entries": 10249, "num_filter_entries": 10249, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092456, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 117, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.968722) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 8931494 bytes
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.970052) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 144.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.6 +0.0 blob) out(8.5 +0.0 blob), read-write-amplify(12.3) write-amplify(5.6) OK, records in: 10770, records dropped: 521 output_compression: NoCompression
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.970066) EVENT_LOG_v1 {"time_micros": 1769092456970059, "job": 72, "event": "compaction_finished", "compaction_time_micros": 62020, "compaction_time_cpu_micros": 21711, "output_level": 6, "num_output_files": 1, "total_output_size": 8931494, "num_input_records": 10770, "num_output_records": 10249, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
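The mon's rocksdb figures are internally consistent: job 72 compacted the freshly flushed 1594051-byte L0 table (file 116) together with the existing L6 table into one 8931494-byte L6 file (file 117), and the reported write-amplify(5.6) / read-write-amplify(12.3) follow directly from the sizes in the EVENT_LOG lines. A quick arithmetic check:

    # Recompute rocksdb's amplification figures from the numbers in the log.
    l0_in = 1_594_051          # file 116, the flushed L0 input
    total_in = 10_631_789      # "input_data_size" of compaction job 72
    out = 8_931_494            # file 117 written to L6

    write_amplify = out / l0_in                     # ~5.6
    read_write_amplify = (total_in + out) / l0_in   # ~12.3
    print(f"{write_amplify:.1f} {read_write_amplify:.1f}")
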
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456970372, "job": 72, "event": "table_file_deletion", "file_number": 116}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000114.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092456971597, "job": 72, "event": "table_file_deletion", "file_number": 114}
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.906404) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971644) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:34:16.971653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:34:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:16.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:17.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"} v 0) v1
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
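The blocklist request comes from an OpenStack client (client.openstack, i.e. nova/cinder via librbd) asking the mons to fence a stale RBD client address; the peon dispatches it, and the new osdmap epochs (e152/e153 below) carry the added entry once it finishes. The same mon command can be issued programmatically through the librados python binding, sketched here assuming python3-rados is installed and a readable keyring for client.openstack:

    # Sketch: issue the same "osd blocklist add" mon command via librados.
    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.openstack")
    cluster.connect()
    cmd = json.dumps({"prefix": "osd blocklist",
                      "blocklistop": "add",
                      "addr": "192.168.122.102:0/3735414885"})
    ret, outbuf, outs = cluster.mon_command(cmd, b"")
    print(ret, outs)
    cluster.shutdown()
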
Jan 22 09:34:17 np0005592159 nova_compute[226433]: 2026-01-22 14:34:17.412 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updated VIF entry in instance network info cache for port e581f563-3369-4b65-92c8-89785e787b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:34:17 np0005592159 nova_compute[226433]: 2026-01-22 14:34:17.414 226437 DEBUG nova.network.neutron [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:17 np0005592159 nova_compute[226433]: 2026-01-22 14:34:17.558 226437 DEBUG oslo_concurrency.lockutils [req-ee8180b9-8c11-4146-adbe-78599f7c94e7 req-d93d5728-c083-499c-9630-4e8a6a9f3b4d 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:17.867+0000 7f47f8ed4640 -1 osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:17 np0005592159 ceph-osd[79779]: osd.2 151 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: from='client.? 192.168.122.102:0/2506543262' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.openstack' cmd=[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]: dispatch
Jan 22 09:34:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e152 e152: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.318 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.687 226437 DEBUG nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.689 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.689 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.690 226437 DEBUG oslo_concurrency.lockutils [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Lock "839e8e64-64a9-4e35-85dd-cdbb7f8e71c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.691 226437 DEBUG nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] No waiting events found dispatching network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.691 226437 WARNING nova.compute.manager [req-666ce1a9-560a-4c8c-a827-037bf8e4acb8 req-11060550-7e15-488f-8e07-6957e05ddb24 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received unexpected event network-vif-plugged-e581f563-3369-4b65-92c8-89785e787b51 for instance with vm_state active and task_state None.#033[00m
Jan 22 09:34:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:18.885+0000 7f47f8ed4640 -1 osd.2 152 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:18 np0005592159 ceph-osd[79779]: osd.2 152 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 e153: 3 total, 3 up, 3 in
Jan 22 09:34:18 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:18 np0005592159 ceph-mon[77081]: from='client.? ' entity='client.openstack' cmd='[{"prefix": "osd blocklist", "blocklistop": "add", "addr": "192.168.122.102:0/3735414885"}]': finished
Jan 22 09:34:18 np0005592159 nova_compute[226433]: 2026-01-22 14:34:18.975 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'trusted_certs' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:18.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:19.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.110 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.110 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Ensure instance console log exists: /var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.111 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.111 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.112 226437 DEBUG oslo_concurrency.lockutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.113 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'size': 0, 'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_options': None, 'device_type': 'disk', 'encryption_format': None, 'guest_format': None, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'image_id': 'dc084f46-456d-429d-85f6-836af4fccd82'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.117 226437 WARNING nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.122 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.123 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.126 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.127 226437 DEBUG nova.virt.libvirt.host [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.127 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='253cca7e-43a2-469f-8e4b-fd8b7bc3551a',id=2,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-22T13:59:30Z,direct_url=<?>,disk_format='qcow2',id=dc084f46-456d-429d-85f6-836af4fccd82,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='7bed6332af7b410aaef81905f1e9b7f9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-22T13:59:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.128 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.129 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.130 226437 DEBUG nova.virt.hardware [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.130 226437 DEBUG nova.objects.instance [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'vcpu_model' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.147 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:34:19 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2725787254' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.579 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5810] manager: (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/38)
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5820] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <warn>  [1769092459.5822] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5840] manager: (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/39)
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5849] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <warn>  [1769092459.5850] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5869] manager: (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5883] manager: (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5893] device (patch-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 09:34:19 np0005592159 NetworkManager[49000]: <info>  [1769092459.5901] device (patch-br-int-to-provnet-2aab3bd6-35b9-42c5-a14a-a2deb89cba28)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.608 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.649 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:19.863+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.887 226437 DEBUG nova.compute.manager [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Received event network-changed-e581f563-3369-4b65-92c8-89785e787b51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.887 226437 DEBUG nova.compute.manager [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing instance network info cache due to event network-changed-e581f563-3369-4b65-92c8-89785e787b51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.888 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Refreshing network info cache for port e581f563-3369-4b65-92c8-89785e787b51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:34:19 np0005592159 nova_compute[226433]: 2026-01-22 14:34:19.903 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:19 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:19Z|00062|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.085 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Jan 22 09:34:20 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1676911576' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 22 09:34:20 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.115 226437 DEBUG oslo_concurrency.processutils [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.119 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] End _get_guest_xml xml=<domain type="kvm">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <uuid>33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</uuid>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <name>instance-00000015</name>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <memory>196608</memory>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <vcpu>1</vcpu>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <metadata>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:name>tempest-MigrationsAdminTest-server-685681022</nova:name>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:creationTime>2026-01-22 14:34:19</nova:creationTime>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:flavor name="m1.micro">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:memory>192</nova:memory>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:disk>1</nova:disk>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:swap>0</nova:swap>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:ephemeral>0</nova:ephemeral>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:vcpus>1</nova:vcpus>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </nova:flavor>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:owner>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:user uuid="549def9aedaa41be8d41ae7c6e534303">tempest-MigrationsAdminTest-775661994-project-member</nova:user>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <nova:project uuid="98a3ce5a8a524b0d8327784d9df9a9db">tempest-MigrationsAdminTest-775661994</nova:project>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </nova:owner>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:root type="image" uuid="dc084f46-456d-429d-85f6-836af4fccd82"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <nova:ports/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </nova:instance>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </metadata>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <sysinfo type="smbios">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <system>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="manufacturer">RDO</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="product">OpenStack Compute</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="serial">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="uuid">33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <entry name="family">Virtual Machine</entry>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </system>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </sysinfo>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <os>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <type arch="x86_64" machine="q35">hvm</type>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <boot dev="hd"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <smbios mode="sysinfo"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </os>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <features>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <acpi/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <apic/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <vmcoreinfo/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </features>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <clock offset="utc">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <timer name="pit" tickpolicy="delay"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <timer name="rtc" tickpolicy="catchup"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <timer name="hpet" present="no"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </clock>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <cpu mode="custom" match="exact">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <model>Nehalem</model>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <topology sockets="1" cores="1" threads="1"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </cpu>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  <devices>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <disk type="network" device="disk">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <target dev="vda" bus="virtio"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <disk type="network" device="cdrom">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <driver type="raw" cache="none"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <source protocol="rbd" name="vms/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk.config">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.100" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.102" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <host name="192.168.122.101" port="6789"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </source>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <auth username="openstack">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:        <secret type="ceph" uuid="088fe176-0106-5401-803c-2da38b73b76a"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      </auth>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <target dev="sda" bus="sata"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </disk>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <serial type="pty">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <log file="/var/lib/nova/instances/33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4/console.log" append="off"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </serial>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <video>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <model type="virtio"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </video>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <input type="tablet" bus="usb"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <rng model="virtio">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <backend model="random">/dev/urandom</backend>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </rng>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="pci" model="pcie-root-port"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <controller type="usb" index="0"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    <memballoon model="virtio">
Jan 22 09:34:20 np0005592159 nova_compute[226433]:      <stats period="10"/>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:    </memballoon>
Jan 22 09:34:20 np0005592159 nova_compute[226433]:  </devices>
Jan 22 09:34:20 np0005592159 nova_compute[226433]: </domain>
Jan 22 09:34:20 np0005592159 nova_compute[226433]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.184 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.185 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.185 226437 INFO nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Using config drive#033[00m
Jan 22 09:34:20 np0005592159 systemd-machined[194970]: New machine qemu-7-instance-00000015.
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.285 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:20 np0005592159 systemd[1]: Started Virtual Machine qemu-7-instance-00000015.
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.678 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092460.6774466, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.678 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Resumed (Lifecycle Event)#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.680 226437 DEBUG nova.compute.manager [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.684 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance running successfully.#033[00m
Jan 22 09:34:20 np0005592159 virtqemud[225907]: argument unsupported: QEMU guest agent is not configured
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.697 226437 DEBUG nova.virt.libvirt.guest [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.697 226437 DEBUG nova.virt.libvirt.driver [None req-ce0c32e3-8171-4503-9e4b-9b20d38a6534 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] finish_migration finished successfully. finish_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11793#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.702 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.705 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] During sync_power_state the instance has a pending task (resize_finish). Skip.#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 DEBUG nova.virt.driver [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] Emitting event <LifecycleEvent: 1769092460.6804512, 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.801 226437 INFO nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] VM Started (Lifecycle Event)#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.854 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Jan 22 09:34:20 np0005592159 nova_compute[226433]: 2026-01-22 14:34:20.858 226437 DEBUG nova.compute.manager [None req-af314bb3-0f7c-4ed8-b395-e38529225b86 - - - - - -] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: resize_finish, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Jan 22 09:34:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:20.876+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:20.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:21.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:21 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:21 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.602 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updated VIF entry in instance network info cache for port e581f563-3369-4b65-92c8-89785e787b51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.603 226437 DEBUG nova.network.neutron [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: 839e8e64-64a9-4e35-85dd-cdbb7f8e71c5] Updating instance_info_cache with network_info: [{"id": "e581f563-3369-4b65-92c8-89785e787b51", "address": "fa:16:3e:35:f2:b5", "network": {"id": "e70febd3-9995-42cd-a322-30bf5db3445d", "bridge": "br-int", "label": "tempest-AttachSCSIVolumeTestJSON-620022538-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f3ac78c8a3fa42b39e64829385672445", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape581f563-33", "ovs_interfaceid": "e581f563-3369-4b65-92c8-89785e787b51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.625 226437 DEBUG oslo_concurrency.lockutils [req-966a4883-7d7f-4c61-80f9-34aef602169e req-4045983b-a6f4-480a-940c-d41560ef7295 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Releasing lock "refresh_cache-839e8e64-64a9-4e35-85dd-cdbb7f8e71c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.715 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.716 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.716 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:34:21 np0005592159 nova_compute[226433]: 2026-01-22 14:34:21.882 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:34:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:21.907+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:21 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.118 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.122 226437 DEBUG oslo_service.periodic_task [None req-59128fce-f882-44ca-aa98-af15dd1a733a - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.162 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:22 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592159 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 22 09:34:22 np0005592159 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000015.scope: Consumed 1.989s CPU time.
Jan 22 09:34:22 np0005592159 systemd-machined[194970]: Machine qemu-7-instance-00000015 terminated.
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.399 226437 INFO nova.virt.libvirt.driver [-] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance destroyed successfully.#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.400 226437 DEBUG nova.objects.instance [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'resources' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.418 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.418 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.438 226437 DEBUG nova.objects.instance [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lazy-loading 'migration_context' on Instance uuid 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Jan 22 09:34:22 np0005592159 nova_compute[226433]: 2026-01-22 14:34:22.633 226437 DEBUG oslo_concurrency.processutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:22.951+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:22.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:23.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:34:23 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1366398213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.077 226437 DEBUG oslo_concurrency.processutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.083 226437 DEBUG nova.compute.provider_tree [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.106 226437 DEBUG nova.scheduler.client.report [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:34:23 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.319 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.497 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.drop_move_claim_at_dest" :: held 1.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.647 226437 INFO nova.compute.manager [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Swapping old allocation on dict_keys(['d4dcb68c-0009-4467-a6f7-0e9fe0236fbc']) held by migration b574b6ef-91e2-4c6d-ad4c-305ec4aedaa0 for instance#033[00m
Jan 22 09:34:23 np0005592159 nova_compute[226433]: 2026-01-22 14:34:23.687 226437 DEBUG nova.scheduler.client.report [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Overwriting current allocation {'allocations': {'d4dcb68c-0009-4467-a6f7-0e9fe0236fbc': {'resources': {'VCPU': 1, 'MEMORY_MB': 192, 'DISK_GB': 1}, 'generation': 20}}, 'project_id': '98a3ce5a8a524b0d8327784d9df9a9db', 'user_id': '549def9aedaa41be8d41ae7c6e534303', 'consumer_generation': 1} on consumer 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4 move_allocations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:2018#033[00m
Jan 22 09:34:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:23.927+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.359 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquiring lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.360 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Acquired lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.360 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.554 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.858 226437 DEBUG nova.network.neutron [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.875 226437 DEBUG oslo_concurrency.lockutils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] Releasing lock "refresh_cache-33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.877 226437 DEBUG nova.virt.libvirt.driver [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] [instance: 33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4] Starting finish_revert_migration finish_revert_migration /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11843#033[00m
Jan 22 09:34:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:24.942+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:24 np0005592159 nova_compute[226433]: 2026-01-22 14:34:24.984 226437 DEBUG nova.storage.rbd_utils [None req-a78178b3-0ea7-4d35-85a5-08af686a035b 549def9aedaa41be8d41ae7c6e534303 98a3ce5a8a524b0d8327784d9df9a9db - - default default] rolling back rbd image(33c1b5d1-1f50-4eb0-b606-26e3aa44c8c4_disk) to snapshot(nova-resize) rollback_to_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:505#033[00m
Jan 22 09:34:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:24.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:25.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:25 np0005592159 nova_compute[226433]: 2026-01-22 14:34:25.288 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:25.893+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:26 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 33 ])
Jan 22 09:34:26 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:26.925+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:26.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:27.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:27.923+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:28 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 3458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.350 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.585 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "f3b9aec5-45fa-4006-a7ca-285acc598bef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.586 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "f3b9aec5-45fa-4006-a7ca-285acc598bef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.602 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.658 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.659 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.666 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.667 226437 INFO nova.compute.claims [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Claim successful on node compute-2.ctlplane.example.com#033[00m
Jan 22 09:34:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:28.928+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:28 np0005592159 nova_compute[226433]: 2026-01-22 14:34:28.991 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:28.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:29.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:29 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Jan 22 09:34:29 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3535101709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.416 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.422 226437 DEBUG nova.compute.provider_tree [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed in ProviderTree for provider: d4dcb68c-0009-4467-a6f7-0e9fe0236fbc update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.453 226437 DEBUG nova.scheduler.client.report [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Inventory has not changed for provider d4dcb68c-0009-4467-a6f7-0e9fe0236fbc based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 20, 'reserved': 1, 'min_unit': 1, 'max_unit': 20, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.488 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.490 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Jan 22 09:34:29 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:29Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 09:34:29 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:29Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:f2:b5 10.100.0.11
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.541 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.541 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.573 226437 INFO nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.601 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.658 226437 INFO nova.virt.block_device [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Booting with volume e82f562e-a2cc-4c3f-b1a7-890d6620c280 at /dev/vda#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.776 226437 DEBUG nova.policy [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3b8229aedbc64b9691880a91d559e987', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7efa67e548af42419a603e06c3b85f6d', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.891 226437 DEBUG os_brick.utils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.102', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-2.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.894 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.909 248518 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.910 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[2d03872c-c15d-4ed0-9c4d-0e65aff645f7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.912 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.920 248518 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.921 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e49359-af62-4c3f-8e03-a0710e1a2fe2]: (4, ('InitiatorName=iqn.1994-05.com.redhat:5333c49f4ca5', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.923 248518 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.940 248518 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.940 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fbccdc-e88c-42cb-9f38-b1d64f499efa]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.943 248518 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0bef1a-39db-4e3f-b1c6-4a7e788a80e6]: (4, '5492a354-d192-4c48-8602-99be1884b049') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.944 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Jan 22 09:34:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:29.973+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.979 226437 DEBUG oslo_concurrency.processutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.983 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.983 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.984 226437 DEBUG os_brick.initiator.connectors.lightos [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.985 226437 DEBUG os_brick.utils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] <== get_connector_properties: return (92ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.102', 'host': 'compute-2.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:5333c49f4ca5', 'do_local_attach': False, 'nvme_hostid': '5350774e-8b5e-4dba-80a9-92d405981c1d', 'system uuid': '5492a354-d192-4c48-8602-99be1884b049', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:5350774e-8b5e-4dba-80a9-92d405981c1d', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203#033[00m
Jan 22 09:34:29 np0005592159 nova_compute[226433]: 2026-01-22 14:34:29.985 226437 DEBUG nova.virt.block_device [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Updating existing volume attachment record: 8698cd44-8fb9-487d-b8fc-95b1321557d8 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631#033[00m
Jan 22 09:34:30 np0005592159 nova_compute[226433]: 2026-01-22 14:34:30.290 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:30 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:30Z|00063|memory|INFO|peak resident set size grew 52% in last 2912.6 seconds, from 16256 kB to 24736 kB
Jan 22 09:34:30 np0005592159 ovn_controller[133156]: 2026-01-22T14:34:30Z|00064|memory|INFO|idl-cells-OVN_Southbound:10969 idl-cells-Open_vSwitch:927 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:365 lflow-cache-entries-cache-matches:292 lflow-cache-size-KB:1519 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:641 ofctrl_installed_flow_usage-KB:468 ofctrl_sb_flow_ref_usage-KB:241
Jan 22 09:34:30 np0005592159 nova_compute[226433]: 2026-01-22 14:34:30.533 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Successfully created port: bf1e3b76-b4f9-4981-a960-f071d92bc35f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Jan 22 09:34:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:30.973+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:30.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:31.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.066 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.069 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.070 226437 INFO nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Creating image(s)#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.071 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.071 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Ensure instance console log exists: /var/lib/nova/instances/f3b9aec5-45fa-4006-a7ca-285acc598bef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.072 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.073 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.073 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.285 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Successfully updated port: bf1e3b76-b4f9-4981-a960-f071d92bc35f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.315 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquiring lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.316 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Acquired lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.317 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Jan 22 09:34:31 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.424 226437 DEBUG nova.compute.manager [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Received event network-changed-bf1e3b76-b4f9-4981-a960-f071d92bc35f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.424 226437 DEBUG nova.compute.manager [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Refreshing instance network info cache due to event network-changed-bf1e3b76-b4f9-4981-a960-f071d92bc35f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.425 226437 DEBUG oslo_concurrency.lockutils [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquiring lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Jan 22 09:34:31 np0005592159 nova_compute[226433]: 2026-01-22 14:34:31.560 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Jan 22 09:34:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:31.941+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:32 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:32.922+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:33.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:33 np0005592159 podman[252973]: 2026-01-22 14:34:33.050347714 +0000 UTC m=+0.087546216 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:34:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:33.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.353 226437 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 28 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.403 226437 DEBUG nova.network.neutron [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Updating instance_info_cache with network_info: [{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Jan 22 09:34:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:33 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.451 226437 DEBUG oslo_concurrency.lockutils [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Releasing lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.452 226437 DEBUG nova.compute.manager [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Instance network_info: |[{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.453 226437 DEBUG oslo_concurrency.lockutils [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] Acquired lock "refresh_cache-f3b9aec5-45fa-4006-a7ca-285acc598bef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.454 226437 DEBUG nova.network.neutron [req-09d8414b-54ba-40ef-b4f2-3ed9ba4aa438 req-3136e030-cac4-4149-9681-72d943f31e28 43adff1334c842d2bbd6b7d8dae6cab7 ced9a3fc9c8a4a3dadc49b291f7b9b3b - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Refreshing network info cache for port bf1e3b76-b4f9-4981-a960-f071d92bc35f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.461 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] [instance: f3b9aec5-45fa-4006-a7ca-285acc598bef] Start _get_guest_xml network_info=[{"id": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "address": "fa:16:3e:8d:4d:dc", "network": {"id": "2b0f60bf-d43c-499d-bf6b-aded338e0ecf", "bridge": "br-int", "label": "tempest-LiveAutoBlockMigrationV225Test-7019380-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7efa67e548af42419a603e06c3b85f6d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbf1e3b76-b4", "ovs_interfaceid": "bf1e3b76-b4f9-4981-a960-f071d92bc35f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'mount_device': '/dev/vda', 'device_type': 'disk', 'attachment_id': '8698cd44-8fb9-487d-b8fc-95b1321557d8', 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'hosts': ['192.168.122.100', '192.168.122.102', '192.168.122.101'], 'ports': ['6789', '6789', '6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'f3b9aec5-45fa-4006-a7ca-285acc598bef', 'attached_at': '', 'detached_at': '', 'volume_id': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280', 'serial': 'e82f562e-a2cc-4c3f-b1a7-890d6620c280'}, 'guest_format': None, 'disk_bus': 'virtio', 'delete_on_termination': True, 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.469 226437 WARNING nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.484 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.485 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.489 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.489 226437 DEBUG nova.virt.libvirt.host [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.491 226437 DEBUG nova.virt.libvirt.driver [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] CPU mode 'custom' models 'Nehalem' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.491 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-22T13:59:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='9033f773-5da0-41ea-80ee-6af3a54f1e68',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.492 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.493 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 22 09:34:33 np0005592159 nova_compute[226433]: 2026-01-22 14:34:33.494 226437 DEBUG nova.virt.hardware [None req-34eb87e6-2213-4316-8175-f06c39b79e38 3b8229aedbc64b9691880a91d559e987 7efa67e548af42419a603e06c3b85f6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
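The nova.virt.hardware trace above records Nova choosing a guest CPU topology for the 1-vCPU m1.nano flavor: with no flavor or image topology hints (limits and preferences all 0:0:0, maxima defaulting to 65536), enumerating the socket/core/thread splits whose product equals the vCPU count yields exactly one candidate, sockets=1, cores=1, threads=1, which is therefore the sorted/desired result. A minimal illustrative sketch of that enumeration follows (an approximation written for this log, not Nova's actual implementation; the function and class names here are hypothetical):

from typing import List, NamedTuple

class VirtCPUTopology(NamedTuple):
    sockets: int
    cores: int
    threads: int

def possible_topologies(vcpus: int,
                        max_sockets: int = 65536,
                        max_cores: int = 65536,
                        max_threads: int = 65536) -> List[VirtCPUTopology]:
    """Enumerate every sockets*cores*threads split that multiplies out to vcpus."""
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // (sockets * cores)
            if threads <= max_threads:
                topologies.append(VirtCPUTopology(sockets, cores, threads))
    return topologies

# For the m1.nano flavor traced above (vcpus=1), the only candidate is 1:1:1.
print(possible_topologies(1))  # [VirtCPUTopology(sockets=1, cores=1, threads=1)]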
Jan 22 09:34:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:33.952+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:34.922+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:35.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:35.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:35.932+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:36.905+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:37.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:37.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:37 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3468 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:37.925+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:38 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:38.899+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:39.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:39.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:39 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:39.895+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:40 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:40.868+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:41.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:41.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:41 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:41.886+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:42 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:42.924+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:43.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:43.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:43 np0005592159 ovsdb-server[47215]: ovs|00005|reconnect|ERR|tcp:127.0.0.1:40134: no response to inactivity probe after 5 seconds, disconnecting
Jan 22 09:34:43 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:43 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:43.968+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:44.984+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:45.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:45 np0005592159 podman[253066]: 2026-01-22 14:34:45.068893156 +0000 UTC m=+0.113348927 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:34:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:45.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:45 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:46.023+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:46 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:47.057+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:47 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:47.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.209 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:34:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:34:47 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:48.028+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:48 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:48.990+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:49.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:49 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:50.019+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:50 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:50 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:34:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:34:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:51.053+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:51 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:51.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:51 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:52.057+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:52 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:52 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 3478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:53.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:53.037+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:53 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:53.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:54.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:54 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:54 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:55.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:55.079+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:55 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:55.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:55 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 10 ])
Jan 22 09:34:56 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:56.104+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:56 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:57.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:57.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:57.095+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:57 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:57 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:58.094+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:58 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:58 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:58 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 3488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:34:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:34:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:34:59.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:34:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:34:59.066+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:59 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:34:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:34:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:34:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:34:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:34:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:34:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:34:59 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:00.046+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:00 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:00 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:00 np0005592159 ovn_controller[133156]: 2026-01-22T14:35:00Z|00065|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Jan 22 09:35:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:01.002+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:01 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:01.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:01.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:01 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:02.030+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:02 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:02 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:03.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:03.036+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:03 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:03.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:03 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:03 np0005592159 ceph-mon[77081]: Health check update: 18 slow ops, oldest one blocked for 3493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:04 np0005592159 podman[253153]: 2026-01-22 14:35:04.013615259 +0000 UTC m=+0.065731211 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:35:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:04.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:04 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:04 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 16 ])
Jan 22 09:35:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:05.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:05.088+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:05 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:05.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:05 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:06 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:06.086+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:06 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:07.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:07.089+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:07 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:07.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:07 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:08.061+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:08 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:08 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:08 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:09.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:09.062+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:09 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:09.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:09 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:10.100+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:10 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:35:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:11.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:11.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:11.117+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:11 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:11 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:12.100+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:12 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:12 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:13.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:13.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:13.114+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:13 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:13 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:13 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:14.118+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:14 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:14 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:15.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:15.097+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:15 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:15.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:15 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:16.054+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:16 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592159 podman[253428]: 2026-01-22 14:35:16.081165205 +0000 UTC m=+0.131345589 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:35:16 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:35:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:17.034+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:17 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:17.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:17.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:17 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:17 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:18.054+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:18 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:18 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:19.023+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:35:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:19.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:35:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:19.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:19 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:20.011+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:20 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:21.005+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:21 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:35:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:35:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:21.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:22.579+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:22 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:23.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:23.624+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:24 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:24.652+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:25.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:25.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:25 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:25.673+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:26 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 27 ])
Jan 22 09:35:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:26.655+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:27.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:27 np0005592159 ceph-mon[77081]: Health check update: 30 slow ops, oldest one blocked for 3518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:27 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:27.690+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:28.646+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:28 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:35:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 11K writes, 60K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 11K writes, 11K syncs, 1.00 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1931 writes, 9853 keys, 1931 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s#012Interval WAL: 1931 writes, 1931 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     79.2      0.81              0.22        36    0.023       0      0       0.0       0.0#012  L6      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    139.2    118.1      2.65              0.92        35    0.076    271K    19K       0.0       0.0#012 Sum      1/0    8.52 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9    106.5    109.0      3.46              1.13        71    0.049    271K    19K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5    136.5    136.9      0.56              0.27        14    0.040     72K   3610       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    139.2    118.1      2.65              0.92        35    0.076    271K    19K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     79.6      0.81              0.22        35    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 3600.0 total, 600.0 interval#012Flush(GB): cumulative 0.063, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.37 GB write, 0.10 MB/s write, 0.36 GB read, 0.10 MB/s read, 3.5 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 40.78 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.00025 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2160,39.01 MB,12.8307%) FilterBlock(71,759.30 KB,0.243915%) IndexBlock(71,1.03 MB,0.340045%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:35:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:29.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:29.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:29.648+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:29 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:30.667+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:30 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:31.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:31 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:31.707+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:32.700+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:32 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:33.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:33.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:33.675+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:33 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:33 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:34.695+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:34 np0005592159 podman[253565]: 2026-01-22 14:35:34.98324485 +0000 UTC m=+0.049234529 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:35:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:35.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:35.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:35.683+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:35 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:36.633+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:36 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:37.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:37.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:37.656+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:37 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:38.625+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:38 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:39.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:39.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:39.579+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:39 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:40.533+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:40 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:41.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:41.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:41.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:41 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:42.452+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:43 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:43 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:43.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:43.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:43.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:44 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:44.400+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:45 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:35:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:45.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:35:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:45.363+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:45 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:46 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:46.390+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:47 np0005592159 podman[253641]: 2026-01-22 14:35:47.021864052 +0000 UTC m=+0.084083751 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:35:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:47 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:47.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.210 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:35:47.211 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:35:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:47.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:47 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:48 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:48 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:48.332+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:49.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:49 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:49.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:49.367+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:49 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:50 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:50.347+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:50 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:51.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:51.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:51 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:51.316+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:51 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:52 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:52.288+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:52 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:53.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:53.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:53 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:53 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:53.324+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:53 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:54 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:54.370+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:54 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:55.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:55.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:55 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:55.375+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:55 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:56 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:56.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:56 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:57.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:57.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:57 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:35:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:57.376+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:57 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:58 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:58 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 3547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:35:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:58.358+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:58 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:35:59.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:35:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:35:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:35:59.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:35:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:35:59 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:35:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:35:59.367+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:59 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:35:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:00 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:00.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:00 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:01.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:01.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:01.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:01 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:01 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:02.390+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:02 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #118. Immutable memtables: 0.
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.545676) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 118
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562545740, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1728, "num_deletes": 255, "total_data_size": 3241730, "memory_usage": 3312272, "flush_reason": "Manual Compaction"}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #119: started
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562563052, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 119, "file_size": 2109887, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59075, "largest_seqno": 60798, "table_properties": {"data_size": 2103097, "index_size": 3605, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17067, "raw_average_key_size": 20, "raw_value_size": 2088285, "raw_average_value_size": 2546, "num_data_blocks": 155, "num_entries": 820, "num_filter_entries": 820, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092457, "oldest_key_time": 1769092457, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 17505 microseconds, and 11035 cpu microseconds.
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.563175) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #119: 2109887 bytes OK
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.563208) [db/memtable_list.cc:519] [default] Level-0 commit table #119 started
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565668) [db/memtable_list.cc:722] [default] Level-0 commit table #119: memtable #1 done
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565694) EVENT_LOG_v1 {"time_micros": 1769092562565685, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.565721) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3233627, prev total WAL file size 3233627, number of live WAL files 2.
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000115.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.568271) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353133' seq:72057594037927935, type:22 .. '6C6F676D0032373634' seq:0, type:0; will stop at (end)
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [119(2060KB)], [117(8722KB)]
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562568383, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [119], "files_L6": [117], "score": -1, "input_data_size": 11041381, "oldest_snapshot_seqno": -1}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #120: 10538 keys, 10878022 bytes, temperature: kUnknown
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562662817, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 120, "file_size": 10878022, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818607, "index_size": 31975, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26373, "raw_key_size": 285143, "raw_average_key_size": 27, "raw_value_size": 10637558, "raw_average_value_size": 1009, "num_data_blocks": 1202, "num_entries": 10538, "num_filter_entries": 10538, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092562, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 120, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.663967) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10878022 bytes
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.665746) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.9 rd, 114.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 8.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 11069, records dropped: 531 output_compression: NoCompression
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.665785) EVENT_LOG_v1 {"time_micros": 1769092562665768, "job": 74, "event": "compaction_finished", "compaction_time_micros": 95288, "compaction_time_cpu_micros": 60307, "output_level": 6, "num_output_files": 1, "total_output_size": 10878022, "num_input_records": 11069, "num_output_records": 10538, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562666731, "job": 74, "event": "table_file_deletion", "file_number": 119}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000117.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092562669998, "job": 74, "event": "table_file_deletion", "file_number": 117}
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.567949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:02.670077) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:03.045 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:36:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:03.047 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:36:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:03.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:03.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:03.348+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:03 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:03 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:03 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:04.383+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:04 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:04 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:05.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:05.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:05.425+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:05 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:05 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:06 np0005592159 podman[253731]: 2026-01-22 14:36:06.010595209 +0000 UTC m=+0.064444405 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 22 09:36:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:06.438+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:06 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:06 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:07.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:07.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:07 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:36:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:08.417+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:08 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:08 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:08 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:09.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:09.408+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:09 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:09 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:10.394+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:10 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:10 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:11 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:11.050 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:36:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:11.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:11.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:11.431+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:11 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:11 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:12.403+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:12 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:12 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:13.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:13.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:13.437+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:13 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:13 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:14.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:14 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:14 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:14 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:15.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:15.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:15.444+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:15 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:15 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:16.395+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:16 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:16 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:17.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:17.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:17 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:17.348+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:18 np0005592159 podman[253888]: 2026-01-22 14:36:18.046799231 +0000 UTC m=+0.098516568 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 09:36:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:18.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:18 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:18 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:36:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:19.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:19.357+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:19 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:20.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:20 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:20 np0005592159 ovn_controller[133156]: 2026-01-22T14:36:20Z|00066|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:36:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:21.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:21.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:21 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:21 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:22.528+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:23 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:23 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:23.537+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:24.515+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:25.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:25 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:36:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:25.500+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:26 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:26.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:27.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:27.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:27 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 19 ])
Jan 22 09:36:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:27.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:28.529+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:28 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:28 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 3578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:29.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:29.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:29.544+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:30 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:30.564+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:31.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111]
                                              ** DB Stats **
                                              Uptime(secs): 3600.5 total, 600.0 interval
                                              Cumulative writes: 9301 writes, 35K keys, 9301 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                              Cumulative WAL: 9301 writes, 2538 syncs, 3.66 writes per sync, written: 0.03 GB, 0.01 MB/s
                                              Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                              Interval writes: 1397 writes, 4636 keys, 1397 commit groups, 1.0 writes per commit group, ingest: 4.06 MB, 0.01 MB/s
                                              Interval WAL: 1397 writes, 614 syncs, 2.28 writes per sync, written: 0.00 GB, 0.01 MB/s
                                              Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:36:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:31.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:31.589+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:32 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:32.544+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:33.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:33 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:33.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:33.520+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:34 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:34 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:34.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:35.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:35 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:35.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:35.538+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:36 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:36.566+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:37 np0005592159 podman[254025]: 2026-01-22 14:36:37.03804764 +0000 UTC m=+0.084289506 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 09:36:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:37.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:37 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:37.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:37.549+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:38 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:38.540+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:39.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:39 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:39.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:39.561+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:40 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:40.536+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:41.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:41.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:41 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:41.511+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:42.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:36:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:36:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:43.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:43 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:43 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:43.486+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:44 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:44.492+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:36:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:36:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:45.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:45 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:45 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:45.493+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:46 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:36:46.459+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:36:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.212 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:36:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:47.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:47 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:36:48 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 3598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:48 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:48.698 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:36:48 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:48.700 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:36:48 np0005592159 ovn_controller[133156]: 2026-01-22T14:36:48Z|00067|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:36:49 np0005592159 podman[254100]: 2026-01-22 14:36:49.059080915 +0000 UTC m=+0.113395746 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 09:36:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:49.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:51.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:51 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:36:51.702 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:36:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:53.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:53.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:53 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:55.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:36:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:57.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #121. Immutable memtables: 0.
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.356001) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 121
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618357970, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 969, "num_deletes": 251, "total_data_size": 1656180, "memory_usage": 1681392, "flush_reason": "Manual Compaction"}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #122: started
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618368777, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 122, "file_size": 1077454, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 60803, "largest_seqno": 61767, "table_properties": {"data_size": 1073224, "index_size": 1818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10654, "raw_average_key_size": 20, "raw_value_size": 1064219, "raw_average_value_size": 2027, "num_data_blocks": 79, "num_entries": 525, "num_filter_entries": 525, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092563, "oldest_key_time": 1769092563, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 12268 microseconds, and 6604 cpu microseconds.
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.368841) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #122: 1077454 bytes OK
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.368871) [db/memtable_list.cc:519] [default] Level-0 commit table #122 started
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370919) [db/memtable_list.cc:722] [default] Level-0 commit table #122: memtable #1 done
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370942) EVENT_LOG_v1 {"time_micros": 1769092618370934, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.370970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 1651234, prev total WAL file size 1651234, number of live WAL files 2.
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000118.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.372121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [122(1052KB)], [120(10MB)]
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618372699, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [122], "files_L6": [120], "score": -1, "input_data_size": 11955476, "oldest_snapshot_seqno": -1}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #123: 10548 keys, 10380298 bytes, temperature: kUnknown
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618458427, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 123, "file_size": 10380298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10321110, "index_size": 31684, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26437, "raw_key_size": 286335, "raw_average_key_size": 27, "raw_value_size": 10140182, "raw_average_value_size": 961, "num_data_blocks": 1185, "num_entries": 10548, "num_filter_entries": 10548, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 123, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.458704) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 10380298 bytes
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.461430) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.3 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(20.7) write-amplify(9.6) OK, records in: 11063, records dropped: 515 output_compression: NoCompression
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.461452) EVENT_LOG_v1 {"time_micros": 1769092618461440, "job": 76, "event": "compaction_finished", "compaction_time_micros": 85814, "compaction_time_cpu_micros": 35004, "output_level": 6, "num_output_files": 1, "total_output_size": 10380298, "num_input_records": 11063, "num_output_records": 10548, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618461837, "job": 76, "event": "table_file_deletion", "file_number": 122}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000120.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092618464449, "job": 76, "event": "table_file_deletion", "file_number": 120}
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.372068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464521) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:58 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:36:58.464540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:36:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:36:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:36:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:36:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:36:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:36:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:36:59.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:01.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:01.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:03.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:03.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:03 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:05.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:05.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:07.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:07.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:08 np0005592159 podman[254186]: 2026-01-22 14:37:08.019191381 +0000 UTC m=+0.072275929 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:37:08 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:09.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:09.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:11.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:11.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:12 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:13.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 09:37:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:13.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 09:37:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.004000099s ======
Jan 22 09:37:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:15.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000099s
Jan 22 09:37:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:17.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:17.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:17.412+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:17 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: Health check update: 0 slow ops, oldest one blocked for 3627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:18.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:18 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:37:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2148974794' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:37:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:19.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:19.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:19 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:19.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:20 np0005592159 podman[254216]: 2026-01-22 14:37:20.024104189 +0000 UTC m=+0.084559983 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 09:37:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:20.401+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:21.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:21.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:21 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:21.402+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:21 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:21 np0005592159 ovn_controller[133156]: 2026-01-22T14:37:21Z|00068|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 09:37:22 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:22.427+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:23.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:23.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:23 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:23 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3632 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:23.469+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:24 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:24.453+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:25.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:25.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:25.440+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:25 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:26.411+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:27.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:27.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:27.422+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:27 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:37:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:37:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:27.752 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:37:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:27.754 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:37:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:28.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:28 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:28 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3637 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:28.757 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:37:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:29.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:29.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:29.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:29 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:30.514+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:31.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:37:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:37:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:31.530+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:31 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:32.520+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:32 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:33.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:33.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:33.561+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:33 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:34.576+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:37:35 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:35.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:35.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:35.594+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:36 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:36.545+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:37 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:37:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:37.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:37:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:37.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:37.553+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:38 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:38.563+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:39 np0005592159 podman[254487]: 2026-01-22 14:37:39.009743173 +0000 UTC m=+0.065478604 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:37:39 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:39.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:39.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:39.514+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:40 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:40.534+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:41.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:41 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:41.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:41.525+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:42 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:42.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:43.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:43.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:43 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:43 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:43.479+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:44 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:44.450+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:45.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:45 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:45.407+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:45 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:46.366+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:46 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:47.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.213 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.213 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:37:47.214 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:37:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:47.392+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:47 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:47 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:48.414+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:48 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:37:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:49.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:37:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:49.432+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:49 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:49 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:50.459+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:50 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:50 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:51 np0005592159 podman[254565]: 2026-01-22 14:37:51.08340331 +0000 UTC m=+0.132515223 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:37:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:51.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:51.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:51 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:51 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:52.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:52 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:52 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:53.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:53.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:53.507+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:53 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:53 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:54.466+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:54 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:54 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 22 09:37:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:55.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 22 09:37:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:55.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:55.504+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:55 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:55 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:56.481+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:56 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:56 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:57.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:57.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:57.449+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:57 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:58 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:58 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:37:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:58.497+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:58 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:59 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:37:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:37:59.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:37:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:37:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:37:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:37:59.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:37:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:37:59.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:59 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:37:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:00 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:00.470+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:00 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:01 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:01.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:01.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:01.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:01 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:02 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:02.428+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:02 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:03.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:03.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:03 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:03 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:03 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:03.410+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:04 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:04.365+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:04 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:05.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:05.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:05 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:05.405+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:05 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:06 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:06.397+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:06 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:38:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:07.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:38:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:07.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:07.416+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:07 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:07 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:08.382+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:08 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:08 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:08 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:38:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:09.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:38:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:09.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:09.380+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:09 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:09 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592159 podman[254651]: 2026-01-22 14:38:10.033220282 +0000 UTC m=+0.080838982 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 09:38:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:10.343+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:10 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:10 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:11.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:11.355+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:11 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:11.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:11 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:12.397+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:12 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:12 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:13.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:13.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:13.418+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:13 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:13 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:14.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:14 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:14 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:15.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:15.398+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:15 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:15 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:16.431+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:16 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:16 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:17.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:17.460+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:17 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:17 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:38:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:38:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/104070897' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:38:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:18.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:18 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:18 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:38:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:38:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:38:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:19.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:38:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:19.462+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:19 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:20.445+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:21.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:21 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:21.327 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:38:21 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:21.328 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:38:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:21.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:21.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:21 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:21 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592159 podman[254679]: 2026-01-22 14:38:22.008254539 +0000 UTC m=+0.073915735 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 09:38:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:22.407+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:22 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:23.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:23.411+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:23 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:24.409+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:24 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:25.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:25.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:25.402+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:25 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:26.422+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:26 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:27.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:27.419+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:27 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:28.332 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:38:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:28.371+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:28 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:29.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:29.376+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:30.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:30.425+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:38:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:31.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:38:31 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:31.452+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:32.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:32.410+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:32 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:33.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:33.446+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:33 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:33 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:34.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:34 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:34.491+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:35.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:35.474+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:35 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:36 np0005592159 podman[255058]: 2026-01-22 14:38:36.072602668 +0000 UTC m=+0.098964688 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:36 np0005592159 podman[255058]: 2026-01-22 14:38:36.204358488 +0000 UTC m=+0.230720578 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Jan 22 09:38:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:38:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:36.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:38:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:36.442+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:36 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:38:37 np0005592159 podman[255210]: 2026-01-22 14:38:37.025449537 +0000 UTC m=+0.070930819 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:38:37 np0005592159 podman[255210]: 2026-01-22 14:38:37.037739466 +0000 UTC m=+0.083220718 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:38:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:37.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:37 np0005592159 podman[255273]: 2026-01-22 14:38:37.328599491 +0000 UTC m=+0.069908938 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, description=keepalived for Ceph, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, com.redhat.component=keepalived-container)
Jan 22 09:38:37 np0005592159 podman[255273]: 2026-01-22 14:38:37.346164358 +0000 UTC m=+0.087473765 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., name=keepalived)
Jan 22 09:38:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:37.415+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:37 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:38.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:38.465+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:38 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.115362498 +0000 UTC m=+0.060671301 container create fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 09:38:39 np0005592159 systemd[1]: Started libpod-conmon-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope.
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.08509255 +0000 UTC m=+0.030401423 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:39 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.212905904 +0000 UTC m=+0.158214747 container init fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.221224864 +0000 UTC m=+0.166534017 container start fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.224647046 +0000 UTC m=+0.169955849 container attach fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:39 np0005592159 strange_galileo[255601]: 167 167
Jan 22 09:38:39 np0005592159 systemd[1]: libpod-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope: Deactivated successfully.
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.229840522 +0000 UTC m=+0.175149345 container died fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 09:38:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:39.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:39 np0005592159 systemd[1]: var-lib-containers-storage-overlay-98ff53d928a7b3a205bb53807d47c18e0a7b8b6bd703d386a0db75c97623ca22-merged.mount: Deactivated successfully.
Jan 22 09:38:39 np0005592159 podman[255583]: 2026-01-22 14:38:39.292027168 +0000 UTC m=+0.237335991 container remove fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 09:38:39 np0005592159 systemd[1]: libpod-conmon-fc8ba34a1f5e473c8750f13cae00ad16cfe1249acbf940d4a50bee85ddc76e2b.scope: Deactivated successfully.
Jan 22 09:38:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:39.475+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:39 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:39 np0005592159 podman[255624]: 2026-01-22 14:38:39.554826201 +0000 UTC m=+0.057662501 container create 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Jan 22 09:38:39 np0005592159 systemd[1]: Started libpod-conmon-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope.
Jan 22 09:38:39 np0005592159 podman[255624]: 2026-01-22 14:38:39.53214235 +0000 UTC m=+0.034978730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 09:38:39 np0005592159 systemd[1]: Started libcrun container.
Jan 22 09:38:39 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:39 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:39 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:39 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 09:38:39 np0005592159 podman[255624]: 2026-01-22 14:38:39.672032767 +0000 UTC m=+0.174869077 container init 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Jan 22 09:38:39 np0005592159 podman[255624]: 2026-01-22 14:38:39.680447729 +0000 UTC m=+0.183284029 container start 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 22 09:38:39 np0005592159 podman[255624]: 2026-01-22 14:38:39.684584933 +0000 UTC m=+0.187421243 container attach 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Jan 22 09:38:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:40.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:40.482+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:40 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]: [
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:    {
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "available": false,
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "ceph_device": false,
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "lsm_data": {},
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "lvs": [],
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "path": "/dev/sr0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "rejected_reasons": [
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "Insufficient space (<5GB)",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "Has a FileSystem"
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        ],
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        "sys_api": {
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "actuators": null,
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "device_nodes": "sr0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "devname": "sr0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "human_readable_size": "482.00 KB",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "id_bus": "ata",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "model": "QEMU DVD-ROM",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "nr_requests": "2",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "parent": "/dev/sr0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "partitions": {},
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "path": "/dev/sr0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "removable": "1",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "rev": "2.5+",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "ro": "0",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "rotational": "1",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "sas_address": "",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "sas_device_handle": "",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "scheduler_mode": "mq-deadline",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "sectors": 0,
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "sectorsize": "2048",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "size": 493568.0,
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "support_discard": "2048",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "type": "disk",
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:            "vendor": "QEMU"
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:        }
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]:    }
Jan 22 09:38:40 np0005592159 nifty_hypatia[255641]: ]
Jan 22 09:38:40 np0005592159 podman[255624]: 2026-01-22 14:38:40.918201918 +0000 UTC m=+1.421038268 container died 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 22 09:38:40 np0005592159 systemd[1]: libpod-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Deactivated successfully.
Jan 22 09:38:40 np0005592159 systemd[1]: libpod-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Consumed 1.262s CPU time.
Jan 22 09:38:40 np0005592159 systemd[1]: var-lib-containers-storage-overlay-264aa49ddd4366b4f3f6d1f971221c3c7f212c76cdced6b4c525a740c975b46d-merged.mount: Deactivated successfully.
Jan 22 09:38:40 np0005592159 podman[255624]: 2026-01-22 14:38:40.989990702 +0000 UTC m=+1.492827002 container remove 03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 09:38:41 np0005592159 systemd[1]: libpod-conmon-03b379f21837d2226640dae6efd214047482455b6d8811c4e3b334103a177ca0.scope: Deactivated successfully.
Jan 22 09:38:41 np0005592159 podman[256853]: 2026-01-22 14:38:41.054246679 +0000 UTC m=+0.108390752 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 22 09:38:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:41.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:41.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:38:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:42.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:42.507+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:42 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:43.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:43.519+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:43 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:43 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:44.547+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:44 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:45.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:45 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:45.524+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:45 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:46.523+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:46 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.215 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:38:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:38:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:47.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:47 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:47.550+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:47 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:38:47 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:47 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:48.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:48.543+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:49 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:49.562+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:49 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:50 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:50.548+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:50 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:51.525+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:51 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:51 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:52.555+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:52 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:52 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:53 np0005592159 ovn_controller[133156]: 2026-01-22T14:38:53Z|00069|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:38:53 np0005592159 podman[256992]: 2026-01-22 14:38:53.141812427 +0000 UTC m=+0.191688651 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:38:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:53.582+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:53 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:53 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:54.616+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:54 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:54 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:54 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 09:38:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:55.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:55.572+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:55 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:55 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:56.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:56.585+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:56 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:56 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:38:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:57.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:38:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:57.587+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:57 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:57 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:38:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:38:58.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:58.599+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:58 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:59 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:38:59 np0005592159 ovn_controller[133156]: 2026-01-22T14:38:59Z|00070|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:38:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:38:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:38:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:38:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:38:59.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:38:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:38:59.593+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:59 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:38:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:00 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:00.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:00.576+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:00 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:01 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:01.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:01.550+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:01 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:02 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:02.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:02.563+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:02 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:03 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:03 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:03.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:03.521+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:03 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:04 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:04.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:04.567+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:04 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:05 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:05.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:05.580+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:05 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:06 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:06.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:06.619+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:06 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:07 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:07.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:07.659+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:07 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:08 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:08 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:08.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:08.624+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:08 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:09 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:09.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:09 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:09.614+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:10 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:10 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:10.651+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:11 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:11.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:11 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:11.613+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:11 np0005592159 podman[257078]: 2026-01-22 14:39:11.988299984 +0000 UTC m=+0.046532767 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:39:12 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:12 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:12.609+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:13 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3743 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:13 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:13.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:13 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:13.595+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:14 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:14 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:14.593+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:15.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:15 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:15 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:15.545+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:16.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:16 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:16 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:16.530+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:17.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:17 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:17 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:17.554+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:18.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:18 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:18 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:18 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:18.589+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:19.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:19 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:19 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:19.574+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:20.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:20 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:20.551+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:21.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:22 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:22.483+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:22.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:22 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:23.127 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:39:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:23.128 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:39:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:23.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:23 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:23.477+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:23 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:23 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:24 np0005592159 podman[257103]: 2026-01-22 14:39:24.051532859 +0000 UTC m=+0.110762913 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:39:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:24 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:24.480+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:24.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:24 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:25.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:25 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:25.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:25 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:26 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:26.464+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:26.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:26 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:27.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:27 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:27.433+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:27 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:28 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:28.481+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:28.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:28 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:28 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:29.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:29 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:29.529+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:29 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:30 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:30.532+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:31 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:31.130 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:39:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:31.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:31.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 43 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:31 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 43 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:39:31 np0005592159 ceph-mon[77081]: 43 slow requests (by type [ 'delayed' : 43 ] most affected pool [ 'vms' : 35 ])
Jan 22 09:39:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:32.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:32.508+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:32 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:32 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:32 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3763 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:33.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:33.494+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:33 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:33 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000030s ======
Jan 22 09:39:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 22 09:39:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:34.543+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:34 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:35 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:35.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:35.570+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:35 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:36 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:36.577+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:36 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:37 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:37.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:37.622+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:37 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:38 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:38.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:38.623+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:38 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:39 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:39.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:39.607+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:39 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:40 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:40.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:40.628+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:40 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:41 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:41.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:41.666+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:41 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:42 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:42.674+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:42 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:43 np0005592159 podman[257193]: 2026-01-22 14:39:43.040498323 +0000 UTC m=+0.088253810 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:39:43 np0005592159 ovn_controller[133156]: 2026-01-22T14:39:43Z|00071|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:39:43 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:43 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:43.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:43 np0005592159 ovn_controller[133156]: 2026-01-22T14:39:43Z|00072|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:39:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:43.647+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:43 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:44 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:44.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:44.682+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:44 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #124. Immutable memtables: 0.
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.272527) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 124
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785272582, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2432, "num_deletes": 251, "total_data_size": 4877991, "memory_usage": 4950528, "flush_reason": "Manual Compaction"}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #125: started
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785299699, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 125, "file_size": 3171350, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61772, "largest_seqno": 64199, "table_properties": {"data_size": 3162263, "index_size": 5325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22746, "raw_average_key_size": 21, "raw_value_size": 3142323, "raw_average_value_size": 2939, "num_data_blocks": 228, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092619, "oldest_key_time": 1769092619, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 27396 microseconds, and 15068 cpu microseconds.
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.299918) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #125: 3171350 bytes OK
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.300014) [db/memtable_list.cc:519] [default] Level-0 commit table #125 started
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301916) [db/memtable_list.cc:722] [default] Level-0 commit table #125: memtable #1 done
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301942) EVENT_LOG_v1 {"time_micros": 1769092785301933, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.301971) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 4867040, prev total WAL file size 4867040, number of live WAL files 2.
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000121.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.304962) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [125(3097KB)], [123(10137KB)]
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785305056, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [125], "files_L6": [123], "score": -1, "input_data_size": 13551648, "oldest_snapshot_seqno": -1}
Jan 22 09:39:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:45.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #126: 11098 keys, 11910942 bytes, temperature: kUnknown
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785400548, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 126, "file_size": 11910942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11847322, "index_size": 34772, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 299403, "raw_average_key_size": 26, "raw_value_size": 11655659, "raw_average_value_size": 1050, "num_data_blocks": 1311, "num_entries": 11098, "num_filter_entries": 11098, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092785, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 126, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.400871) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11910942 bytes
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.402620) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.8 rd, 124.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 9.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(8.0) write-amplify(3.8) OK, records in: 11617, records dropped: 519 output_compression: NoCompression
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.402637) EVENT_LOG_v1 {"time_micros": 1769092785402629, "job": 78, "event": "compaction_finished", "compaction_time_micros": 95575, "compaction_time_cpu_micros": 52514, "output_level": 6, "num_output_files": 1, "total_output_size": 11910942, "num_input_records": 11617, "num_output_records": 11098, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785403164, "job": 78, "event": "table_file_deletion", "file_number": 125}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000123.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092785404945, "job": 78, "event": "table_file_deletion", "file_number": 123}
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.304851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:39:45.405068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:39:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:45.660+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:45 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:46.698+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:46 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.216 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.217 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:39:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:39:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:47.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:47.663+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:47 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:48.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:48.665+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:48 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:49.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:39:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:39:49 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:49.688+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:49 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:50.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:50.659+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:50 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:51.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:51 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:51 np0005592159 ovn_controller[133156]: 2026-01-22T14:39:51Z|00073|binding|INFO|Releasing lport 3c983055-ff9e-4976-9d9f-e2b4b8598736 from this chassis (sb_readonly=0)
Jan 22 09:39:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:51.651+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:51 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:52 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:52.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:52.671+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:52 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:53.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:53 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:53 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:53.660+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:53 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:54 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:54.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:54.698+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:54 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:55 np0005592159 podman[257400]: 2026-01-22 14:39:55.080646218 +0000 UTC m=+0.133320654 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:39:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:39:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:55.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:39:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:39:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:55.654+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:55 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:56.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:56 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:56 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:56.647+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:56 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:57.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:57 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:57.670+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:57 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:39:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:58 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:39:58 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:58 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:58.715+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:39:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:39:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:39:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:39:59.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:39:59 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:39:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:39:59.724+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:39:59 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:00.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:00 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:00.679+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:00 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 44 slow ops, oldest one blocked for 3788 sec, osd.2 has slow ops
Jan 22 09:40:00 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:01 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:01.666+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:01 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:02.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:02 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:02.694+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:02 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.328 143497 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port d6334cad-de94-4b67-9127-1d06fa285533 with type ""#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.330 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:f2:b5 10.100.0.11'], port_security=['fa:16:3e:35:f2:b5 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '839e8e64-64a9-4e35-85dd-cdbb7f8e71c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e70febd3-9995-42cd-a322-30bf5db3445d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f3ac78c8a3fa42b39e64829385672445', 'neutron:revision_number': '4', 'neutron:security_group_ids': '28729834-6047-40c0-87ed-a5757ce1c57a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8526bd5b-b1c9-4a14-b4ce-8f8562154268, chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff0fc0eb7c0>], logical_port=e581f563-3369-4b65-92c8-89785e787b51) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.331 143497 INFO neutron.agent.ovn.metadata.agent [-] Port e581f563-3369-4b65-92c8-89785e787b51 in datapath e70febd3-9995-42cd-a322-30bf5db3445d unbound from our chassis#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.333 143497 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e70febd3-9995-42cd-a322-30bf5db3445d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.335 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[3fc4c367-a858-42f9-ac4c-fa4fb14c83a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.335 143497 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d namespace which is not needed anymore#033[00m
Jan 22 09:40:03 np0005592159 ovn_controller[133156]: 2026-01-22T14:40:03Z|00074|binding|INFO|Removing iface tape581f563-33 ovn-installed in OVS
Jan 22 09:40:03 np0005592159 ovn_controller[133156]: 2026-01-22T14:40:03Z|00075|binding|INFO|Removing lport e581f563-3369-4b65-92c8-89785e787b51 ovn-installed in OVS
Jan 22 09:40:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:03.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : haproxy version is 2.8.14-c23fe91
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [NOTICE]   (252633) : path to executable is /usr/sbin/haproxy
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : Exiting Master process...
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : Exiting Master process...
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [ALERT]    (252633) : Current worker (252635) exited with code 143 (Terminated)
Jan 22 09:40:03 np0005592159 neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d[252629]: [WARNING]  (252633) : All workers exited. Exiting... (0)
Jan 22 09:40:03 np0005592159 systemd[1]: libpod-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope: Deactivated successfully.
Jan 22 09:40:03 np0005592159 podman[257497]: 2026-01-22 14:40:03.506984423 +0000 UTC m=+0.052948916 container died 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 09:40:03 np0005592159 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857-userdata-shm.mount: Deactivated successfully.
Jan 22 09:40:03 np0005592159 systemd[1]: var-lib-containers-storage-overlay-32d345afaa304af39e2e2833fda5b6655c176308d120bb6c3c940577074f3c39-merged.mount: Deactivated successfully.
Jan 22 09:40:03 np0005592159 podman[257497]: 2026-01-22 14:40:03.561837007 +0000 UTC m=+0.107801470 container cleanup 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 09:40:03 np0005592159 systemd[1]: libpod-conmon-43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857.scope: Deactivated successfully.
Jan 22 09:40:03 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:03.661+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:03 np0005592159 podman[257524]: 2026-01-22 14:40:03.665447598 +0000 UTC m=+0.065696101 container remove 43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.674 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5ec947-4867-495b-a631-e549ef402454]: (4, ('Thu Jan 22 02:40:03 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d (43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857)\n43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857\nThu Jan 22 02:40:03 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d (43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857)\n43125dacd357b517e238cd06be25c2275d0954f87098ef055b4b9bef1e2b9857\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.677 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[415c0713-75c7-4483-a6d9-e3263edbe761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.678 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape70febd3-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:40:03 np0005592159 kernel: tape70febd3-90: left promiscuous mode
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.719 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[85adeb91-cd02-40be-852c-2f7c61a94b02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.741 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[38e44909-5be1-413a-a450-66d6e3c906ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.744 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[23e2a22d-a72a-418b-b0e9-a4af31947d25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.773 237689 DEBUG oslo.privsep.daemon [-] privsep: reply[0710fede-4ae0-49d4-b30a-4c9d6d755edc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630663, 'reachable_time': 36962, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257541, 'error': None, 'target': 'ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 systemd[1]: run-netns-ovnmeta\x2de70febd3\x2d9995\x2d42cd\x2da322\x2d30bf5db3445d.mount: Deactivated successfully.
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.779 143856 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e70febd3-9995-42cd-a322-30bf5db3445d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Jan 22 09:40:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:03.780 143856 DEBUG oslo.privsep.daemon [-] privsep: reply[faa1d9c4-8fba-4ba7-8498-e29dd3cf8f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Jan 22 09:40:03 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:03 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:04 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:04.613+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:04 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:05.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:05 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:05.586+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:05 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:06.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:06 np0005592159 ceph-osd[79779]: osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:06.615+0000 7f47f8ed4640 -1 osd.2 153 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:07 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e154 e154: 3 total, 3 up, 3 in
Jan 22 09:40:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:07.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:07 np0005592159 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:07.625+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:08 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:08 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:08.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:08 np0005592159 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:08.668+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:09 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:09.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:09 np0005592159 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:09.685+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:10 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:10.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:10 np0005592159 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:10.702+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:11.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:11 np0005592159 ceph-osd[79779]: osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:11.723+0000 7f47f8ed4640 -1 osd.2 154 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:12 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:12.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 e155: 3 total, 3 up, 3 in
Jan 22 09:40:12 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:12.717+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:13 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:13 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:13 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:13.727+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:14 np0005592159 podman[257601]: 2026-01-22 14:40:14.029598264 +0000 UTC m=+0.081968779 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:40:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:14 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:14.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:14 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:14.732+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:15.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:15 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:15 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:15.769+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:16 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:16.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:16 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:16.784+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:17.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:17 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:17 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:17.806+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4054496500' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:40:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:18.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:18 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:18.768+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:18 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:19.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:19.756+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:19 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:20.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:20.794+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:20 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:21.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:21.772+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:21 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:21 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:22.794+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:22 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:22 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:23.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:23.751+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:23 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:24 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:24.716+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:24 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:25 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:25 np0005592159 podman[257654]: 2026-01-22 14:40:25.468743642 +0000 UTC m=+0.162548741 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 22 09:40:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:25.766+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:25 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:26.779+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:26 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:26 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:26.833 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:40:26 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:26.835 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:40:27 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:27.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:27.742+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:27 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:28 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:28 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:28.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:28.739+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:28 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:29 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:29.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:29.715+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:29 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:30.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:30.704+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:30 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:31 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:31.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:31.752+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:31 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:32 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:32.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:32.721+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:32 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:33 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:33 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:33.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:33.710+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:33 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:34 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:34.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:34.699+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:34 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:35 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:35.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:35.713+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:35 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:35 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:35.836 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:40:36 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:36.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:36.721+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:36 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:37.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #127. Immutable memtables: 0.
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.646390) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 127
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837646499, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 969, "num_deletes": 256, "total_data_size": 1535676, "memory_usage": 1554616, "flush_reason": "Manual Compaction"}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #128: started
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837660471, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 128, "file_size": 1008549, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64204, "largest_seqno": 65168, "table_properties": {"data_size": 1004293, "index_size": 1779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10872, "raw_average_key_size": 20, "raw_value_size": 995023, "raw_average_value_size": 1842, "num_data_blocks": 77, "num_entries": 540, "num_filter_entries": 540, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092786, "oldest_key_time": 1769092786, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 14116 microseconds, and 7758 cpu microseconds.
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.660525) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #128: 1008549 bytes OK
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.660549) [db/memtable_list.cc:519] [default] Level-0 commit table #128 started
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662292) [db/memtable_list.cc:722] [default] Level-0 commit table #128: memtable #1 done
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662336) EVENT_LOG_v1 {"time_micros": 1769092837662303, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.662362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 1530677, prev total WAL file size 1530677, number of live WAL files 2.
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000124.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.663153) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373633' seq:72057594037927935, type:22 .. '6C6F676D0033303135' seq:0, type:0; will stop at (end)
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [128(984KB)], [126(11MB)]
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837663228, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [128], "files_L6": [126], "score": -1, "input_data_size": 12919491, "oldest_snapshot_seqno": -1}
Jan 22 09:40:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:37.691+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:37 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #129: 11109 keys, 12766855 bytes, temperature: kUnknown
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837773245, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 129, "file_size": 12766855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12702136, "index_size": 35870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27781, "raw_key_size": 300988, "raw_average_key_size": 27, "raw_value_size": 12509152, "raw_average_value_size": 1126, "num_data_blocks": 1353, "num_entries": 11109, "num_filter_entries": 11109, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092837, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 129, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.773700) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 12766855 bytes
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.775744) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 117.2 rd, 115.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(25.5) write-amplify(12.7) OK, records in: 11638, records dropped: 529 output_compression: NoCompression
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.775781) EVENT_LOG_v1 {"time_micros": 1769092837775764, "job": 80, "event": "compaction_finished", "compaction_time_micros": 110212, "compaction_time_cpu_micros": 49134, "output_level": 6, "num_output_files": 1, "total_output_size": 12766855, "num_input_records": 11638, "num_output_records": 11109, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837776554, "job": 80, "event": "table_file_deletion", "file_number": 128}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000126.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092837781026, "job": 80, "event": "table_file_deletion", "file_number": 126}
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.663036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781192) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781196) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:37 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:40:37.781198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:40:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:38.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:38.692+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:38 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:39.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:39 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:39.710+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:39 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:40.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:40.698+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:40 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:41.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:41 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:41.661+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:41 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:42.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:42.644+0000 7f47f8ed4640 -1 osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:42 np0005592159 ceph-osd[79779]: osd.2 155 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:43.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e156 e156: 3 total, 3 up, 3 in
Jan 22 09:40:43 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:43 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:43.629+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:43 np0005592159 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:44 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:44.598+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:44 np0005592159 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:44.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:45 np0005592159 podman[257719]: 2026-01-22 14:40:45.038417295 +0000 UTC m=+0.095522796 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 09:40:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:40:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:45.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:40:45 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:45.617+0000 7f47f8ed4640 -1 osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:45 np0005592159 ceph-osd[79779]: osd.2 156 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e157 e157: 3 total, 3 up, 3 in
Jan 22 09:40:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:46.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:46.629+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:46 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.217 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:40:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:40:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:47 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:47.634+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:47 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:48 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:48 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:48.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:48 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:48.634+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:49.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:49 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:49 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:49.681+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:50 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:50.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:50 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:50.688+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:51.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:51 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:51 np0005592159 ceph-osd[79779]: osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:51.705+0000 7f47f8ed4640 -1 osd.2 157 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:40:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:52.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:40:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 e158: 3 total, 3 up, 3 in
Jan 22 09:40:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:52.710+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:52 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:53.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:53 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:53 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:53.680+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:53 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:54.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:54 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:54 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:54.721+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:54 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:40:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:40:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:55.685+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:55 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:55 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:56 np0005592159 podman[257817]: 2026-01-22 14:40:56.052927298 +0000 UTC m=+0.127088742 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:40:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:56.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:56.653+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:56 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:57 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:57.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:57.611+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:57 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:40:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:40:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:40:58 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:58 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:40:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:58.582+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:58 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:40:58.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:59 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:40:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:40:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:40:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:40:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:40:59.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:40:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:40:59.587+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:59 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:40:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:00 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:00.566+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:00 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:00.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:01 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:01.608+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:01 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:02 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:02.614+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:02 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:02.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:03 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:03 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:03.636+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:03 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:41:04 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:04.604+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:04 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:04.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:05.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:05.557+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:05 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:05 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:06.579+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:06 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:06 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:06.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:07 np0005592159 ovn_controller[133156]: 2026-01-22T14:41:07Z|00076|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 22 09:41:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:07.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:07 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:07.612+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:07 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:08.612+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:08 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:08 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:08 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:08.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:09.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:09.570+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:09 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:09 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:10.570+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:10 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:10.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:10 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:11.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:11.616+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:11 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:11 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:12.648+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:12 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:12.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:12 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:12 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:13.660+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:13 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:13 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:14.616+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:14 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:14.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:14 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:15.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:15.650+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:15 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:15 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:16 np0005592159 podman[258061]: 2026-01-22 14:41:16.005206092 +0000 UTC m=+0.062185511 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:41:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:16.649+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:16 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:16.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:16 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:17.601+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:17 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #130. Immutable memtables: 0.
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.665299) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 130
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877665372, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 809, "num_deletes": 251, "total_data_size": 1240210, "memory_usage": 1257784, "flush_reason": "Manual Compaction"}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #131: started
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877671522, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 131, "file_size": 597682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65173, "largest_seqno": 65977, "table_properties": {"data_size": 594250, "index_size": 1147, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 10006, "raw_average_key_size": 21, "raw_value_size": 586609, "raw_average_value_size": 1264, "num_data_blocks": 49, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092838, "oldest_key_time": 1769092838, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 6251 microseconds, and 3186 cpu microseconds.
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.671574) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #131: 597682 bytes OK
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.671601) [db/memtable_list.cc:519] [default] Level-0 commit table #131 started
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673176) [db/memtable_list.cc:722] [default] Level-0 commit table #131: memtable #1 done
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673189) EVENT_LOG_v1 {"time_micros": 1769092877673185, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673211) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 1235868, prev total WAL file size 1235868, number of live WAL files 2.
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000127.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673846) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373537' seq:72057594037927935, type:22 .. '6D6772737461740032303038' seq:0, type:0; will stop at (end)
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [131(583KB)], [129(12MB)]
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877673935, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [131], "files_L6": [129], "score": -1, "input_data_size": 13364537, "oldest_snapshot_seqno": -1}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #132: 11068 keys, 9678808 bytes, temperature: kUnknown
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877751828, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 132, "file_size": 9678808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9618647, "index_size": 31376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27717, "raw_key_size": 300644, "raw_average_key_size": 27, "raw_value_size": 9430588, "raw_average_value_size": 852, "num_data_blocks": 1165, "num_entries": 11068, "num_filter_entries": 11068, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 132, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.752141) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9678808 bytes
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.753986) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.4 rd, 124.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(38.6) write-amplify(16.2) OK, records in: 11573, records dropped: 505 output_compression: NoCompression
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.754003) EVENT_LOG_v1 {"time_micros": 1769092877753994, "job": 82, "event": "compaction_finished", "compaction_time_micros": 77983, "compaction_time_cpu_micros": 37712, "output_level": 6, "num_output_files": 1, "total_output_size": 9678808, "num_input_records": 11573, "num_output_records": 11068, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877754241, "job": 82, "event": "table_file_deletion", "file_number": 131}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000129.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092877756377, "job": 82, "event": "table_file_deletion", "file_number": 129}
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.673723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:41:17.756448) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:17 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:18.593+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:18 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:18.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:18 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:19.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:19.551+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:19 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:20.578+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:20.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:21 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:21.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:21.556+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:21 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:22.578+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:22 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:22.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:23 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:23 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:23.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:23.539+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:23 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:24 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:24.492+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:24 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:24.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:25 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:25.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:25 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:25.534+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:26 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:26.521+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:26 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:26.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:27 np0005592159 podman[258139]: 2026-01-22 14:41:27.068463805 +0000 UTC m=+0.115350682 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:41:27 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:27.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:27.536+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:27 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:28 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:28 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:28.537+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:28 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:28.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:29 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:29.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:29.561+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:29 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.628 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:41:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.629 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:41:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:29.630 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:41:30 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:30.526+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:30 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:31 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:31.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:31.547+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:31 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:32.508+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:32 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:32.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:33 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:33 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:33.487+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:33 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:33.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:34 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:34.484+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:34 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:35 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:35.485+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:35 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:35.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:36.487+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:36 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:36 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:36.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:37.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:37.529+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:37 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 44 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:37 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:38.511+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:38 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:38.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:38 np0005592159 ceph-mon[77081]: 44 slow requests (by type [ 'delayed' : 44 ] most affected pool [ 'vms' : 36 ])
Jan 22 09:41:38 np0005592159 ceph-mon[77081]: Health check update: 44 slow ops, oldest one blocked for 3887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:39.473+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:39 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:39.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:39 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:40.449+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:40 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:40.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:40 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:41.414+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:41 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:41.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:41 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:42.444+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:42.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:42 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:42 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:43.434+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:43 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:43.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:43 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:44.474+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:44 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:44.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:44 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:45.440+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:45 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:45.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:45 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:46 np0005592159 podman[258198]: 2026-01-22 14:41:46.179084657 +0000 UTC m=+0.085041993 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 09:41:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:46.423+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:46 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:46.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:47 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.218 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:41:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:41:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:47.455+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:47 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:47.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:48 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:48 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:48.457+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:48 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:48.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:49 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:49 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:49.453+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:49.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:50 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:50 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:50.407+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:41:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:41:51 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:51 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:51.432+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:51.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:52 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:52 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:52.466+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:52.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:53 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:53 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3902 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:53 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:53.501+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:41:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:53.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:41:54 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:54 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:54.511+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:54.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:55 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:55.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:55 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:55.557+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:56 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:56 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:56.513+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:56.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:57 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:57 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:57.526+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:57.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:58 np0005592159 podman[258253]: 2026-01-22 14:41:58.084765015 +0000 UTC m=+0.132219249 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:41:58 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:58 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:41:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:58.563+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:58 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:41:58.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:41:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:41:59 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:41:59.515+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:59 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:41:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:41:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:41:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:41:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:41:59.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:00 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:00.523+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:00 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:00.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:01 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:01.543+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:01 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:01.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:02.501+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:02 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:02 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:02.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:03.490+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:03 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:03.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:03 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:03 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:04.505+0000 7f47f8ed4640 -1 osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:04 np0005592159 ceph-osd[79779]: osd.2 158 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:04 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 e159: 3 total, 3 up, 3 in
Jan 22 09:42:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:04.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:05.550+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:05 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:05.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:05 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:42:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:42:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:42:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:06.515+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:06 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:06 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:06.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:07.484+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:07 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:07.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:07 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:08.514+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:08 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:08 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 5 ])
Jan 22 09:42:08 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 3918 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:08.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:09.478+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:09 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:09.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:09 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:10.505+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:10 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:11 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:11.465+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:11 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:11.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:12 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:42:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:12.442+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:12 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:12.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:13 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:13 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:13.465+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:13 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:13.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:14 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:14.509+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:14 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:15 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:15.550+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:15 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:15.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:16 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:16.503+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:16 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:16.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:17 np0005592159 podman[258521]: 2026-01-22 14:42:17.011901292 +0000 UTC m=+0.066032613 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:42:17 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:17.456+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:17 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:17.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:18 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:18 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:18.466+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:18 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 09:42:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:18.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 09:42:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:19 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:19.435+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:19 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:19.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:20 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:20.387+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:20 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:20.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:21.363+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:21 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:21 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:21.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:22.396+0000 7f47f8ed4640 -1 osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:22 np0005592159 ceph-osd[79779]: osd.2 159 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e160 e160: 3 total, 3 up, 3 in
Jan 22 09:42:22 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:22.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:23 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:23 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:23.440+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:23 np0005592159 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:23.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:24.393+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:24 np0005592159 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:24 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:25.368+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:25 np0005592159 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:25 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:25.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:26.389+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:26 np0005592159 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:26 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:26.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:27.420+0000 7f47f8ed4640 -1 osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:27 np0005592159 ceph-osd[79779]: osd.2 160 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:27 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:27.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 e161: 3 total, 3 up, 3 in
Jan 22 09:42:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:28.389+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3937 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #133. Immutable memtables: 0.
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.576210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 133
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948576270, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 1226, "num_deletes": 252, "total_data_size": 2044325, "memory_usage": 2076624, "flush_reason": "Manual Compaction"}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #134: started
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948591645, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 134, "file_size": 1341494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65982, "largest_seqno": 67203, "table_properties": {"data_size": 1336559, "index_size": 2266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 13171, "raw_average_key_size": 20, "raw_value_size": 1325558, "raw_average_value_size": 2097, "num_data_blocks": 98, "num_entries": 632, "num_filter_entries": 632, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092877, "oldest_key_time": 1769092877, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 15500 microseconds, and 7856 cpu microseconds.
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591707) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #134: 1341494 bytes OK
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.591734) [db/memtable_list.cc:519] [default] Level-0 commit table #134 started
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593812) [db/memtable_list.cc:722] [default] Level-0 commit table #134: memtable #1 done
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593837) EVENT_LOG_v1 {"time_micros": 1769092948593830, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.593860) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 2038299, prev total WAL file size 2038299, number of live WAL files 2.
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000130.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.595052) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [134(1310KB)], [132(9451KB)]
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948595115, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [134], "files_L6": [132], "score": -1, "input_data_size": 11020302, "oldest_snapshot_seqno": -1}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #135: 11179 keys, 9368745 bytes, temperature: kUnknown
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948691206, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 135, "file_size": 9368745, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9308330, "index_size": 31374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 27973, "raw_key_size": 304151, "raw_average_key_size": 27, "raw_value_size": 9118696, "raw_average_value_size": 815, "num_data_blocks": 1161, "num_entries": 11179, "num_filter_entries": 11179, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769092948, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 135, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.691620) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 9368745 bytes
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.693238) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.5 rd, 97.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(15.2) write-amplify(7.0) OK, records in: 11700, records dropped: 521 output_compression: NoCompression
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.693268) EVENT_LOG_v1 {"time_micros": 1769092948693252, "job": 84, "event": "compaction_finished", "compaction_time_micros": 96249, "compaction_time_cpu_micros": 45965, "output_level": 6, "num_output_files": 1, "total_output_size": 9368745, "num_input_records": 11700, "num_output_records": 11179, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948693775, "job": 84, "event": "table_file_deletion", "file_number": 134}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000132.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769092948696808, "job": 84, "event": "table_file_deletion", "file_number": 132}
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.594986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696913) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696924) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696929) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696933) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:42:28.696937) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:42:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:28.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:29 np0005592159 podman[258597]: 2026-01-22 14:42:29.087833714 +0000 UTC m=+0.134796128 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:42:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:29.419+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:29.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:30 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:30.419+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:30 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:30.495 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:42:30 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:30.501 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:42:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:30.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:31 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:31 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:42:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:31.459+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:31.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:32 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:32.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:32.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:33 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:33 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 3942 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:33.495+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:33.504 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:42:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:33.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:34 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:34.541+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:34.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:35 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:35.572+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:35.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:36 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:36.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:36.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:37 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:37.574+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:37.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:38 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:38 np0005592159 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:38.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:39.573+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:39.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:39 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:40.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:40.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:41.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:41.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:41 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:42.621+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:42.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:43.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:43.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:43 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:43 np0005592159 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:44.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:44 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:44.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:45.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:45.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:45 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:46.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:46.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.219 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:42:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:42:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:42:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:47.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:42:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:47.690+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:47 np0005592159 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:47 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:48 np0005592159 podman[258683]: 2026-01-22 14:42:48.035291895 +0000 UTC m=+0.090212452 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 09:42:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:48.675+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:48 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:48.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:49.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:49.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:49 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:50.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:50.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:50 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:51.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:51.668+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:51 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:52.675+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:52.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:52 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:52 np0005592159 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:53.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:53.708+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:53 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:54.662+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:54.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:54 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:42:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:55.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:42:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:55.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:55 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:56.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:56.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:56 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:42:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:42:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:57.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:42:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:57.681+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:57 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:57 np0005592159 ceph-mon[77081]: Health check update: 51 slow ops, oldest one blocked for 3968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:42:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:58.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:42:58.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:58 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:42:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:42:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:42:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:42:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:42:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:42:59.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:42:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:42:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:00 np0005592159 podman[258710]: 2026-01-22 14:43:00.095796801 +0000 UTC m=+0.133667169 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:43:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:00.731+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:00.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:00 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:01.750+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:02.769+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:02.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:03 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:03 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:03.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:03.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:04 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:04.690+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:04.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:05 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:05.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:05.679+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:06.680+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:06.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:07 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:07.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:07.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:08 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:08 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:08.648+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:08.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:09.603+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:09.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:09 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:10.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:10.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:10 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:11.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:11.650+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:11 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:12.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:12.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:13 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:13 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:13.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:13.682+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:43:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:43:14 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:14.633+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:14.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:15 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:15.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:15.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:16 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:16.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:16.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:17.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:17.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:17 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:43:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:43:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:43:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3087436954' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:43:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:18.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:18.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:19 np0005592159 podman[258930]: 2026-01-22 14:43:19.022482657 +0000 UTC m=+0.077615615 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 09:43:19 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:19 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:19 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:19.664+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:20 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:20.656+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:20.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:21 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:21.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:21.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:22 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:43:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:22.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:22.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:23.040 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:43:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:23.042 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:43:23 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:23 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:23.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:23.740+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:24 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:24.721+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:24.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:25.734+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:25 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:26.783+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:26 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:26.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:27.044 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:43:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:27.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:27.777+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:27 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 3998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:28.822+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:28.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:29.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:29.859+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:30.898+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:30.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:31 np0005592159 podman[259055]: 2026-01-22 14:43:31.044734907 +0000 UTC m=+0.106177411 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:43:31 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:31.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:31.921+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:32.933+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:32.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:33 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:33 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:33.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:33.982+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:34 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:34.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:34.981+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:35 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:35.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:36.008+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:36 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:36.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:37.027+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:37 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:37.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:38.016+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:38 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:38 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:38.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:39.044+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:39.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:39 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:39 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:40.054+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:40.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:41.015+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:41.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:42.014+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:42 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:42.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:43.053+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:43 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:43 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:43.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:44.087+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:44 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:44.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:45.117+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:45 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:46.162+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:46 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:46.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:47.148+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.220 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:43:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:43:47 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:48.172+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:48 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:48 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4017 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:48.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:49.199+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:49 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:49.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:50 np0005592159 podman[259141]: 2026-01-22 14:43:50.016703531 +0000 UTC m=+0.070138437 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:43:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:50.151+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:50 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:50.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:51.147+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:51 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:51.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:52.158+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:52 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:52.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:53.179+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:53 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:53 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4022 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:53.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:54.190+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:54 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:54.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:55.206+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:55 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:43:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:55.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:43:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:56.186+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:56 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:43:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:43:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:57.227+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:57 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:57.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:58.272+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:58 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:58 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4027 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:43:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:43:59.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:43:59.320+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:43:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:43:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:43:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:43:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:43:59.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:43:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:43:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:00.321+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:00 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:01.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:01.291+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:01.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:01 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:02 np0005592159 podman[259167]: 2026-01-22 14:44:02.09223525 +0000 UTC m=+0.141245620 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 09:44:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:02.324+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:03.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:03 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:03 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:03.352+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:03.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:04.348+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:04 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:05.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:05.299+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:05 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:05.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:06.265+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:06 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:07.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:07.290+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:07.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:08.278+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:08 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:08 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4037 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:09.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:09.268+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:09.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:09 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:10.229+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:10 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:11.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:11.258+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:11 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:12.293+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:12 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:13.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:13.330+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:13.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:13 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:13 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4042 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:14.375+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:14 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:14.902 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:14 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:14.904 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:44:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:15.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:15.400+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:15.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:15 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:16.370+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:17.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:17 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:17.328+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:17.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:18 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:18 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4047 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:18.350+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:19.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:19.393+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:19 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:19.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:20.424+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:20 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:20 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:20.907 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:44:21 np0005592159 podman[259257]: 2026-01-22 14:44:21.029194507 +0000 UTC m=+0.078707404 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:44:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:21.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:21.444+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:21.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:21 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:21 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:22.491+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:22 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:23.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:23.510+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:23.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:24 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4052 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:24 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:24.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:25.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:25 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:25.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:44:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:25.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:44:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:26.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:44:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:27.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:27.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:27.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:27 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:28.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:28 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4058 sec, osd.2 has slow ops (SLOW_OPS)
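The SLOW_OPS health-check updates above grow by 5 sec per update (4058, 4063, 4068, ...), in step with the ~5-second cadence of the updates themselves, so the oldest blocked op on osd.2 has been stuck since roughly 13:36:50 UTC. A minimal, illustrative Python parser for these update lines; the hard-coded timestamp is the UTC time of the journal entry above, and the onset is simply log time minus the reported blocked duration.

import re
from datetime import datetime, timedelta

SLOW_RE = re.compile(
    r'Health check update: (?P<ops>\d+) slow ops, '
    r'oldest one blocked for (?P<blocked>\d+) sec, '
    r'(?P<daemon>osd\.\d+) has slow ops \(SLOW_OPS\)'
)

msg = ('Health check update: 52 slow ops, oldest one blocked for 4058 sec, '
       'osd.2 has slow ops (SLOW_OPS)')
logged_at = datetime(2026, 1, 22, 14, 44, 28)   # UTC time of the log entry above
m = SLOW_RE.search(msg)
if m:
    onset = logged_at - timedelta(seconds=int(m['blocked']))
    print(m['daemon'], m['ops'], 'slow ops, oldest since ~', onset)
# osd.2 52 slow ops, oldest since ~ 2026-01-22 13:36:50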
Jan 22 09:44:28 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:29.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:29.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:29.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:29 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:30.584+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:30 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:31.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:31.558+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:31.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:31 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:32.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:32 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:33.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:33 np0005592159 podman[259465]: 2026-01-22 14:44:33.06859329 +0000 UTC m=+0.121336202 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 09:44:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:33.634+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:33.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:33 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:44:33 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:34.621+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:34 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:35.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:35.635+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:35.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:35 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:36.586+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:37.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:37.559+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:37.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:38 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:38 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:38.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:39 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:39.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:39.505+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:39.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:40 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:40.528+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:41.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:41 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:41.500+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:41.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:42.495+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:42 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:43.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:43.534+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:43 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:43 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:43.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:44.504+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:44 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:45.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:45.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:45.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:45 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:45 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:46.544+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:46 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:47.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.221 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
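The three DEBUG lines above (Acquiring lock / Lock acquired / Lock released) are emitted by oslo.concurrency's lockutils around ProcessMonitor._check_child_processes. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the class below is a stand-in for illustration, not neutron's actual ProcessMonitor.

from oslo_concurrency import lockutils


class ProcessMonitorSketch:
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes(self):
        # Whatever runs here is serialized under the named lock; with DEBUG
        # logging enabled, lockutils logs "Acquiring lock", "Lock acquired"
        # and "Lock released" around this call, as seen above.
        pass


ProcessMonitorSketch()._check_child_processes()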
Jan 22 09:44:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:47.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:47.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:48 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:48 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:48.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:49.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:49 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:49.568+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:49.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:50 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:50.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:51 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:51.558+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:51.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:52 np0005592159 podman[259601]: 2026-01-22 14:44:52.031958307 +0000 UTC m=+0.089706515 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:44:52 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:52.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:53 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:53 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:53.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:53.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:54 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:54.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:55.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:55.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:55 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:44:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:55.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:44:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:56.424 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:44:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:44:56.427 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
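At 14:44:56 the agent matches an SB_Global update (nb_cfg 26 -> 27) and deliberately delays its chassis update by 5 seconds; the corresponding DbSetCommand writing 'neutron:ovn-metadata-sb-cfg': '27' into Chassis_Private external_ids appears at 14:45:01 below. A hedged Python sketch of that delay-then-ack behaviour; write_ack here is a hypothetical stand-in for the agent's real OVSDB transaction, used only to show the timing.

import threading

def on_sb_global_update(nb_cfg, write_ack, delay=5.0):
    # Defer the acknowledgement of the new nb_cfg value by `delay` seconds,
    # mirroring the "Delaying updating chassis table for 5 seconds" message.
    threading.Timer(delay, write_ack, args=(nb_cfg,)).start()

on_sb_global_update(
    27,
    lambda v: print("external_ids['neutron:ovn-metadata-sb-cfg'] =", str(v)),
)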
Jan 22 09:44:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:56.530+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:56 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:57.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:57.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:57.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:58.531+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:58 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:58 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:44:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:44:59.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:44:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:44:59.565+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:44:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:44:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:44:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:44:59.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:44:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:44:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:00.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:00 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:01.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:01 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:01.429 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:45:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:01.567+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:01.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:01 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:02.557+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:02 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:03.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:03.570+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:03.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:04 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:04 np0005592159 podman[259627]: 2026-01-22 14:45:04.07438879 +0000 UTC m=+0.127965398 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true)
Jan 22 09:45:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:04.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:05 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:05.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:05.501+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:05.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:06 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:06.481+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:07.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:07.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:07.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #136. Immutable memtables: 0.
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.916057) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 136
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107916095, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2312, "num_deletes": 257, "total_data_size": 4499819, "memory_usage": 4577656, "flush_reason": "Manual Compaction"}
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #137: started
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107937789, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 137, "file_size": 2933109, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 67208, "largest_seqno": 69515, "table_properties": {"data_size": 2924499, "index_size": 4911, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2629, "raw_key_size": 21758, "raw_average_key_size": 21, "raw_value_size": 2905497, "raw_average_value_size": 2807, "num_data_blocks": 213, "num_entries": 1035, "num_filter_entries": 1035, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769092948, "oldest_key_time": 1769092948, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 21871 microseconds, and 10824 cpu microseconds.
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.937925) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #137: 2933109 bytes OK
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.937973) [db/memtable_list.cc:519] [default] Level-0 commit table #137 started
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940004) [db/memtable_list.cc:722] [default] Level-0 commit table #137: memtable #1 done
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940018) EVENT_LOG_v1 {"time_micros": 1769093107940012, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.940037) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 4489306, prev total WAL file size 4489306, number of live WAL files 2.
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000133.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.941367) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303134' seq:72057594037927935, type:22 .. '6C6F676D0033323637' seq:0, type:0; will stop at (end)
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [137(2864KB)], [135(9149KB)]
Jan 22 09:45:07 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093107941418, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [137], "files_L6": [135], "score": -1, "input_data_size": 12301854, "oldest_snapshot_seqno": -1}
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #138: 11687 keys, 12155168 bytes, temperature: kUnknown
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108023264, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 138, "file_size": 12155168, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12089150, "index_size": 35697, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 29253, "raw_key_size": 316696, "raw_average_key_size": 27, "raw_value_size": 11888244, "raw_average_value_size": 1017, "num_data_blocks": 1339, "num_entries": 11687, "num_filter_entries": 11687, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093107, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 138, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.023616) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 12155168 bytes
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.025290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.4 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.9 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.3) write-amplify(4.1) OK, records in: 12214, records dropped: 527 output_compression: NoCompression
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.025337) EVENT_LOG_v1 {"time_micros": 1769093108025303, "job": 86, "event": "compaction_finished", "compaction_time_micros": 81777, "compaction_time_cpu_micros": 42759, "output_level": 6, "num_output_files": 1, "total_output_size": 12155168, "num_input_records": 12214, "num_output_records": 11687, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108026084, "job": 86, "event": "table_file_deletion", "file_number": 137}
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000135.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093108028109, "job": 86, "event": "table_file_deletion", "file_number": 135}
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:07.941212) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028185) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:08.028198) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:08 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:08.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:09.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:09 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:09.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:09.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:10 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:10.556+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:11.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:11.507+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:11 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:11.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:12.500+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:12 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:13.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:13.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:13.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:13 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 43 ])
Jan 22 09:45:13 np0005592159 ceph-mon[77081]: Health check update: 52 slow ops, oldest one blocked for 4103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:13 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:14.479+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:14 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:15.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:15.434+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:15.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:15 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:16.441+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:17.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:17.482+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:17.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4107 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:18.490+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #139. Immutable memtables: 0.
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.713681) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 139
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118713768, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 389, "num_deletes": 251, "total_data_size": 292936, "memory_usage": 300376, "flush_reason": "Manual Compaction"}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #140: started
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118717671, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 140, "file_size": 192034, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69520, "largest_seqno": 69904, "table_properties": {"data_size": 189789, "index_size": 344, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 5891, "raw_average_key_size": 18, "raw_value_size": 185289, "raw_average_value_size": 595, "num_data_blocks": 15, "num_entries": 311, "num_filter_entries": 311, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093108, "oldest_key_time": 1769093108, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 4012 microseconds, and 1531 cpu microseconds.
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717712) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #140: 192034 bytes OK
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.717731) [db/memtable_list.cc:519] [default] Level-0 commit table #140 started
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719588) [db/memtable_list.cc:722] [default] Level-0 commit table #140: memtable #1 done
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719606) EVENT_LOG_v1 {"time_micros": 1769093118719599, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719628) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 290358, prev total WAL file size 290358, number of live WAL files 2.
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000136.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.720071) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [140(187KB)], [138(11MB)]
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118720114, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [140], "files_L6": [138], "score": -1, "input_data_size": 12347202, "oldest_snapshot_seqno": -1}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #141: 11487 keys, 10715069 bytes, temperature: kUnknown
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118781781, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 141, "file_size": 10715069, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651467, "index_size": 33793, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 28741, "raw_key_size": 313309, "raw_average_key_size": 27, "raw_value_size": 10455002, "raw_average_value_size": 910, "num_data_blocks": 1254, "num_entries": 11487, "num_filter_entries": 11487, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093118, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 141, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.782993) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10715069 bytes
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.784389) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 199.8 rd, 173.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.6 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(120.1) write-amplify(55.8) OK, records in: 11998, records dropped: 511 output_compression: NoCompression
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.784410) EVENT_LOG_v1 {"time_micros": 1769093118784400, "job": 88, "event": "compaction_finished", "compaction_time_micros": 61788, "compaction_time_cpu_micros": 27782, "output_level": 6, "num_output_files": 1, "total_output_size": 10715069, "num_input_records": 11998, "num_output_records": 11487, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118784575, "job": 88, "event": "table_file_deletion", "file_number": 140}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000138.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093118787342, "job": 88, "event": "table_file_deletion", "file_number": 138}
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.719981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:18 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:45:18.787514) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:45:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:19.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:19 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:19.473+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:19.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:20 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:20.476+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:21.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:21 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:21.470+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:21.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:22 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:22.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:23 np0005592159 podman[259715]: 2026-01-22 14:45:23.041380851 +0000 UTC m=+0.083956703 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:45:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:23.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:23 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:23 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4112 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:23.545+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:23.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:24 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:24.556+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:25.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:25 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:25.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:25.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:26 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:26.632+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:27 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:27.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:27.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:28 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:28 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:28.665+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:28 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:45:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 13K writes, 70K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1902 writes, 9925 keys, 1902 commit groups, 1.0 writes per commit group, ingest: 16.49 MB, 0.03 MB/s#012Interval WAL: 1902 writes, 1902 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     81.8      0.93              0.28        44    0.021       0      0       0.0       0.0#012  L6      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.2    137.9    118.3      3.36              1.27        43    0.078    364K    23K       0.0       0.0#012 Sum      1/0   10.22 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.2    108.0    110.4      4.29              1.55        87    0.049    364K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.1    114.1    116.1      0.82              0.42        16    0.051     92K   4158       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    137.9    118.3      3.36              1.27        43    0.078    364K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     82.1      0.93              0.28        43    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.074, interval 0.012#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.46 GB write, 0.11 MB/s write, 0.45 GB read, 0.11 MB/s read, 4.3 seconds#012Interval compaction: 0.09 GB write, 0.16 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 50.52 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000539 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2667,48.19 MB,15.8516%) FilterBlock(87,1018.30 KB,0.327115%) IndexBlock(87,1.34 MB,0.440181%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:45:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:29.673+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:29.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:30 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:30.702+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:31.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:31.655+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:31.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:32 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:32.644+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:33 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:33 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:33.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:33.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:34 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:34.662+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:35 np0005592159 podman[259923]: 2026-01-22 14:45:35.035903676 +0000 UTC m=+0.090032994 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:45:35 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:35.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:35.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:36 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:45:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:45:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:36.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:37 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:37.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:37.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:38 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:38 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:38.748+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:39 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:39.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:39.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:39.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:40 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:40.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:41.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:41 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:41.647+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:41.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:42 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:45:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:42.643+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:43.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:43 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 32 ])
Jan 22 09:45:43 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 4133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:43.608+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:43.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:44 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:44.630+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:45.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:45 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:45.678+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:45.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:46 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:46.644+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:47.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:47.222 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:45:47 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:47.633+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:48 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:48 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:48.586+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:49 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:49.596+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:49.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:50 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:50.554+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:51.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:51 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:51.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:51.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:52 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:52.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:53.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:45:53 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:53 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:53.637+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:53.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:54 np0005592159 podman[260058]: 2026-01-22 14:45:54.044492431 +0000 UTC m=+0.097272306 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:45:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:54 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:54.602+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:55 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:55.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:56 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:56.588+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:45:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:57.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:45:57 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:57.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:57.704 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:45:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:45:57.705 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:45:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:57.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:58 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:58 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:45:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:58.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:45:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:45:59.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:45:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:45:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:45:59.562+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:45:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:45:59 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 44 ])
Jan 22 09:45:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:45:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:45:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:45:59.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:00 np0005592159 ovn_controller[133156]: 2026-01-22T14:46:00Z|00077|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:46:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:00.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:00 np0005592159 ceph-mon[77081]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:01.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:01.631+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 46 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592159 ceph-mon[77081]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592159 ceph-mon[77081]: 46 slow requests (by type [ 'delayed' : 46 ] most affected pool [ 'vms' : 38 ])
Jan 22 09:46:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:01.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:02.599+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:02 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:03.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:03.555+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:03.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:03 np0005592159 ceph-mon[77081]: Health check update: 46 slow ops, oldest one blocked for 4153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:03 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:04.530+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:04 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:05.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:05.542+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:46:05.707 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:46:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:05.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:05 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:06 np0005592159 podman[260083]: 2026-01-22 14:46:06.08150313 +0000 UTC m=+0.135555129 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:46:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:06.589+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:07 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:07.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:07.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:07.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:08 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:08 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:08.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:09 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:09.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:09.604+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:09.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:10 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:10.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:11 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:11.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:11.642+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:11.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:12 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:12.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:13 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:13 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:13.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:13.559+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:13.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:14 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:14.593+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:15.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:15.638+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:15.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:16 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:16.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:17 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:17.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:17.625+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:17.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:46:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904739524' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:46:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:18.619+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:19 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:19.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:19.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:19.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:20 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:20.696+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:21.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:21 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:21.702+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:21.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:22 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:22.685+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:23.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:23 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:23 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:23.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:23.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:24 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:24.666+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:25 np0005592159 podman[260169]: 2026-01-22 14:46:25.034124703 +0000 UTC m=+0.083560992 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:46:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:25.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:25 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:25.642+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:25.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:26 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:26.597+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:27.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:27 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:27.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:27.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:28 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:28 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:28.639+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:29.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:29 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:29.635+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:29.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:30 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:30.645+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:46:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.5 total, 600.0 interval#012Cumulative writes: 10K writes, 36K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 10K writes, 2992 syncs, 3.43 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 952 writes, 1811 keys, 952 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s#012Interval WAL: 952 writes, 454 syncs, 2.10 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:46:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:31 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:31.606+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 1 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 09:46:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:31.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:32 np0005592159 ceph-mon[77081]: 1 slow requests (by type [ 'delayed' : 1 ] most affected pool [ 'vms' : 1 ])
Jan 22 09:46:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:32.578+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:33.591+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:33 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:33 np0005592159 ceph-mon[77081]: Health check update: 1 slow ops, oldest one blocked for 4183 sec, osd.2 has slow ops (SLOW_OPS)
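The health update above reports the oldest op blocked for 4183 s, and the later updates in this log add roughly 5 s each time (4188, 4193, 4198, ...), which is just the monitor's periodic health refresh rather than new stalls. Subtracting the reported age from the message timestamp puts the start of the stall at about 13:36:50 UTC; a quick check using the values taken from this line:

from datetime import datetime, timedelta, timezone

# UTC timestamp of the health update and the age it reports.
reported_at = datetime(2026, 1, 22, 14, 46, 33, tzinfo=timezone.utc)
blocked_for = timedelta(seconds=4183)

# Prints 2026-01-22 13:36:50+00:00, the approximate moment the op got stuck.
print("op blocked since about", reported_at - blocked_for)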
Jan 22 09:46:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:33.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
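The peon's _set_new_cache_sizes line reports byte counts; converted, the monitor cache target is about 973 MiB, with 332 MiB each for what the field names label as the incremental and full allocations and 304 MiB for the key-value allocation. A tiny conversion using the values copied from the line above:

MIB = 1024 * 1024
for name, value in [("cache_size", 1020054731),
                    ("inc_alloc", 348127232),
                    ("full_alloc", 348127232),
                    ("kv_alloc", 318767104)]:
    print(f"{name}: {value / MIB:.0f} MiB")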
Jan 22 09:46:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:34.567+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:34 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:35.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:35.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:35 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:35.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:36.550+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:36 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592159 podman[260246]: 2026-01-22 14:46:37.060872441 +0000 UTC m=+0.115839797 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:46:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:37.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:37.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:37 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:37.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:38.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:38 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:38 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:39.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:39.653+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:39 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:39.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:40.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:40 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:41.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:41.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:41 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:41.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:42.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:42 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:43.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:43.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4193 sec, osd.2 has slow ops (SLOW_OPS)
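With SLOW_OPS persisting on osd.2 for over an hour, the usual next step is to inspect that daemon's in-flight ops directly. A sketch of one way to do this programmatically, assuming the ceph CLI and admin credentials are available on this host (in a cephadm deployment this would typically run inside cephadm shell); the JSON field names used below (num_ops, ops, age, description) are the ones commonly seen in this command's output, not confirmed by this log:

import json
import subprocess

# "ceph daemon <name> <command>" talks to the local admin socket of that daemon.
raw = subprocess.run(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
    capture_output=True, text=True, check=True,
).stdout

report = json.loads(raw)
print("ops in flight:", report.get("num_ops"))
for op in report.get("ops", []):
    print(op.get("age"), op.get("description"))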
Jan 22 09:46:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:43 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:46:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:43.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:44.750+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:46:44 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:45.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:45.780+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:45.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:45 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:46.784+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:46 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:46:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:46:47.224 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
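The three oslo_concurrency lines above show the metadata agent taking and releasing its internal "_check_child_processes" lock in under a millisecond, i.e. routine ProcessMonitor housekeeping rather than lock contention. The acquire/held/release messages come from oslo_concurrency.lockutils; a minimal sketch of the same mechanism (the function and lock name here are examples, not Neutron code):

from oslo_concurrency import lockutils

@lockutils.synchronized("check_child_processes")   # example lock name
def check_children():
    # Body runs while holding the lock; lockutils emits DEBUG
    # "acquired ... / released ..." lines like the ones above.
    pass

check_children()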
Jan 22 09:46:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:47.798+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:47.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:47 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:47 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:48.835+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:46:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:46:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:49.785+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:49.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:50.749+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:46:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:51.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:51.761+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:51 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:51.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:52.799+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:52 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:53.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:53.786+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:53 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:53 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:53.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:54.797+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:54 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:55.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:55.766+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:55 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:55.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:56 np0005592159 podman[260511]: 2026-01-22 14:46:56.015048344 +0000 UTC m=+0.064947160 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:46:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:56.764+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:56 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:57.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:57.739+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:58.737+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:58 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:46:58 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:46:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:46:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:46:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:46:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:46:59.777+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:46:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:46:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:46:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:46:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:46:59.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:46:59 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:00.740+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:00 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:01.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:01.725+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 10 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:01 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:01.874 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:47:01 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:01.875 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:47:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:02 np0005592159 ceph-mon[77081]: 10 slow requests (by type [ 'delayed' : 10 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:47:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:02.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:02 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:02.879 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:47:03 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:03 np0005592159 ceph-mon[77081]: Health check update: 10 slow ops, oldest one blocked for 4213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:03.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:03.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:03.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:04 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:04.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:05.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:05 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:05.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:05.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:06.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:06 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:07.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:07.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:07 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:07.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:08 np0005592159 podman[260537]: 2026-01-22 14:47:08.057670382 +0000 UTC m=+0.107952628 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:47:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:08.691+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:08 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:08 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:09.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:09.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:09 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:09.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:10.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:10 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:11.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:11.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:11 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:11.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:12.726+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:12 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:13.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:13.733+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:13 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:13.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:14.692+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:14 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:15.693+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:15.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:16 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:16.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:17.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:17.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:17.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:18 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:18 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:18.605+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:19 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:19.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:19.619+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:19.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:20 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:20.647+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:21.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:21.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:21.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:22 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:22.659+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:23.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:23.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:23 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:23 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:23.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:24.638+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:24 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:25.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:25.597+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:25 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:25.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:26.626+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:27 np0005592159 podman[260626]: 2026-01-22 14:47:27.05278509 +0000 UTC m=+0.096920360 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 09:47:27 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:27.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:27.614+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:27.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:28 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:28 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:28.594+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:29 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:29.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:29.603+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:30.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:30 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:30.569+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:31.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:31.533+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:31 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:32.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:32.540+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:32 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:33.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:33.566+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:33 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:33 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:34.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:34.541+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:34 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:35.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:35.590+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:35 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:36.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:36.609+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:36 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:37.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:37.659+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:37 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:38.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:38.676+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:39 np0005592159 podman[260701]: 2026-01-22 14:47:39.106498931 +0000 UTC m=+0.161323824 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:47:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:39.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:39 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:39 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:39.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:40.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:40 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:40.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:41.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:41 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:41.720+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:42.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:42 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:42.710+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:43.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:43 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:43 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:43.681+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:44.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:44 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:44.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:45.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:45 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:45.626+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:46.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:46 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:46.652+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.225 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.225 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:47:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:47:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:47:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:47.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:47 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:47.696+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:48.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:48 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:48 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:48.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:49.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:49 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:49.704+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:50.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:50 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:50.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:47:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:51.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:47:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:51.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:52.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:52 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:52.728+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:53.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:53.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:53 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:53 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:54.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:54.694+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:55 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:55.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:55.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 09:47:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:56.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 09:47:56 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:56.649+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:57.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:57 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:57.640+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:57 np0005592159 podman[260920]: 2026-01-22 14:47:57.997424707 +0000 UTC m=+0.058904530 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:47:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:47:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:47:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:47:58 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:58 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:47:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:47:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:58.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:47:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:47:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:47:59.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:47:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:47:59 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:47:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:47:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:47:59.576+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:00.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:00.572+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #142. Immutable memtables: 0.
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.878784) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 142
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280878818, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 2445, "num_deletes": 251, "total_data_size": 4756343, "memory_usage": 4825256, "flush_reason": "Manual Compaction"}
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #143: started
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280897910, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 143, "file_size": 3081552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69909, "largest_seqno": 72349, "table_properties": {"data_size": 3072496, "index_size": 5229, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23031, "raw_average_key_size": 21, "raw_value_size": 3052524, "raw_average_value_size": 2823, "num_data_blocks": 226, "num_entries": 1081, "num_filter_entries": 1081, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093119, "oldest_key_time": 1769093119, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 19197 microseconds, and 6903 cpu microseconds.
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897974) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #143: 3081552 bytes OK
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.897999) [db/memtable_list.cc:519] [default] Level-0 commit table #143 started
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900181) [db/memtable_list.cc:722] [default] Level-0 commit table #143: memtable #1 done
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900198) EVENT_LOG_v1 {"time_micros": 1769093280900192, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.900234) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 4745301, prev total WAL file size 4745301, number of live WAL files 2.
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000139.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.901895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [143(3009KB)], [141(10MB)]
Jan 22 09:48:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093280902007, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [143], "files_L6": [141], "score": -1, "input_data_size": 13796621, "oldest_snapshot_seqno": -1}
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #144: 12051 keys, 12161457 bytes, temperature: kUnknown
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281023144, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 144, "file_size": 12161457, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12093413, "index_size": 36827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30149, "raw_key_size": 326770, "raw_average_key_size": 27, "raw_value_size": 11886173, "raw_average_value_size": 986, "num_data_blocks": 1377, "num_entries": 12051, "num_filter_entries": 12051, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093280, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 144, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.023476) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 12161457 bytes
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.025509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.9 rd, 100.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.2 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 12568, records dropped: 517 output_compression: NoCompression
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.025525) EVENT_LOG_v1 {"time_micros": 1769093281025518, "job": 90, "event": "compaction_finished", "compaction_time_micros": 121180, "compaction_time_cpu_micros": 54985, "output_level": 6, "num_output_files": 1, "total_output_size": 12161457, "num_input_records": 12568, "num_output_records": 12051, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281026687, "job": 90, "event": "table_file_deletion", "file_number": 143}
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000141.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093281028957, "job": 90, "event": "table_file_deletion", "file_number": 141}
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:00.901732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029131) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029143) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:48:01.029146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:48:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:01.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:01 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:01.622+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:02.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:02.616+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:02 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:03.225 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:48:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:03.226 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:48:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:03.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:03.588+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:03 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:03 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:04.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:04.543+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:04 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:05.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:05.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:05 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:06.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:06.487+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:07 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:07.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:07.468+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:08 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:08.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:08 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:08.228 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:48:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:08.496+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:09 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:09.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:09.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:10.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:10 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:10 np0005592159 podman[261020]: 2026-01-22 14:48:10.079183109 +0000 UTC m=+0.155069799 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:48:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:10.546+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:11 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:11.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:11.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:12.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:12 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:12.613+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:13 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:13 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:13.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:13.622+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:14.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:14 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:14.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:15 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:15.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:15.701+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:16.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:16 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:16.697+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:17.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:17.674+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:17 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:18.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:48:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:48:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:48:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818491039' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:48:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:18.721+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:19 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:19.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:19.762+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:20.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:20 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:20.772+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:21.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:21 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:21.800+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:22.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:22 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:22 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:22.833+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:23.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:23.825+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:24.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:24 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:24 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:24.857+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:25.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:25 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:25.902+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:26.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:26.861+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:26 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:27.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:27.899+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:27 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:28.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:28.931+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:28 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:28 np0005592159 podman[261085]: 2026-01-22 14:48:28.985786461 +0000 UTC m=+0.051150926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:48:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:29.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:29 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:29.974+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:30.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:30.950+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:30 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:31.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:31.914+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:31 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:32.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:32.953+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:32 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:33.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:33.908+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:34.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:34 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:34 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:34.882+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:35 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:35.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:35.848+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:36.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:36 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:36.833+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:37.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:37 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:37.854+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:38.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:38.852+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:38 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:38 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:39.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:39.816+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:40.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:40 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:40.776+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:41 np0005592159 podman[261160]: 2026-01-22 14:48:41.073082098 +0000 UTC m=+0.126170439 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 22 09:48:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:41.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:41 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:41.743+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:42.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:42 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:42.252 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:48:42 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:42.254 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:48:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:42.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:43 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:43 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:43.255 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:48:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:43.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:43.735+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:44.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:44 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:44 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:44.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:45.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:45 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:45.708+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:46.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:46 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:46.749+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:48:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:48:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:48:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:47.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:47.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:48.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:48 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:48.710+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:49 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:49.664+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:50.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:50 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:50.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:51 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:51.678+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:52.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:52 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:52.680+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:53.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:53 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:53 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:53.670+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:54.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:54.677+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:54 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:48:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:55.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:48:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:55.669+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:55 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:48:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:56.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:48:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:56.711+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:56 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:57.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:57 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:48:58.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:58.709+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:59 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:59 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:48:59 np0005592159 podman[261362]: 2026-01-22 14:48:59.195370171 +0000 UTC m=+0.079333707 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 09:48:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:48:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:48:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:48:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:48:59.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:48:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:48:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:48:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:48:59.704+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:00.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:00 np0005592159 podman[261556]: 2026-01-22 14:49:00.127220147 +0000 UTC m=+0.073508294 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:49:00 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592159 podman[261556]: 2026-01-22 14:49:00.251894395 +0000 UTC m=+0.198182572 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 09:49:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:00.657+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:00 np0005592159 podman[261711]: 2026-01-22 14:49:00.927971914 +0000 UTC m=+0.051838554 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:49:00 np0005592159 podman[261711]: 2026-01-22 14:49:00.939706312 +0000 UTC m=+0.063572922 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:49:01 np0005592159 podman[261776]: 2026-01-22 14:49:01.14837932 +0000 UTC m=+0.058820248 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, release=1793, architecture=x86_64, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 22 09:49:01 np0005592159 podman[261776]: 2026-01-22 14:49:01.160743255 +0000 UTC m=+0.071184193 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, vcs-type=git, description=keepalived for Ceph, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, distribution-scope=public, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 22 09:49:01 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:01.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:01.679+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:02.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:49:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:02.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:03 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:03 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:03.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:03.674+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:04.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:04 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:04.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:05 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:05.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:05.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:06.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:06 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:06.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:07 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:07.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:07.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:08.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:08 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:08 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:08.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:09 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:09.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:09.726+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:10.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:49:10 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:10.727+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:11.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:11 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:11.766+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:12 np0005592159 podman[262046]: 2026-01-22 14:49:12.099124987 +0000 UTC m=+0.137880946 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 22 09:49:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:12.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:12.808+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:12 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:13.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:13.769+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:13 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:13 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:14.775+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:14 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:15.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:15.809+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:15 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:16.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:16.823+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:16 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:49:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:17.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:49:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:17.786+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:17 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:18.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/653685768' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:49:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:18.794+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:18 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:19.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:19.821+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:19 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:20.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:20.835+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:21 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:21.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:21.839+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:22 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:22.849+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:23.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:23.881+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:24 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:24 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:24.416 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:49:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:24.419 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:49:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:24.871+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:25 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:25.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:25.844+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:26 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:26.811+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:27 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:27.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:27.817+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 22 09:49:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:28.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 22 09:49:28 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:28 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:28.781+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:29 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:29.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:29.744+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:30 np0005592159 podman[262082]: 2026-01-22 14:49:30.047668424 +0000 UTC m=+0.100436482 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 09:49:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:30 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:30 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:30.421 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:49:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:30.742+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:31 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:31.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:31.743+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:32 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:32.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 61 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:33 np0005592159 ceph-mon[77081]: 61 slow requests (by type [ 'delayed' : 61 ] most affected pool [ 'vms' : 48 ])
Jan 22 09:49:33 np0005592159 ceph-mon[77081]: Health check update: 61 slow ops, oldest one blocked for 4363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:33.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:33.695+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:34 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:34.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:35 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:35.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:35.706+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:36 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:36.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:37 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:37.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:37.610+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #145. Immutable memtables: 0.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.166736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 145
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378166795, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 1533, "num_deletes": 258, "total_data_size": 2844302, "memory_usage": 2887168, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #146: started
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378181817, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 146, "file_size": 1857518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72354, "largest_seqno": 73882, "table_properties": {"data_size": 1851434, "index_size": 3094, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15427, "raw_average_key_size": 20, "raw_value_size": 1838085, "raw_average_value_size": 2460, "num_data_blocks": 134, "num_entries": 747, "num_filter_entries": 747, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093281, "oldest_key_time": 1769093281, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 15126 microseconds, and 7624 cpu microseconds.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181868) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #146: 1857518 bytes OK
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.181891) [db/memtable_list.cc:519] [default] Level-0 commit table #146 started
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183392) [db/memtable_list.cc:722] [default] Level-0 commit table #146: memtable #1 done
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183439) EVENT_LOG_v1 {"time_micros": 1769093378183429, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.183465) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 2836985, prev total WAL file size 2845729, number of live WAL files 2.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000142.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.184525) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323636' seq:72057594037927935, type:22 .. '6C6F676D0033353230' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [146(1813KB)], [144(11MB)]
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378184593, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [146], "files_L6": [144], "score": -1, "input_data_size": 14018975, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #147: 12267 keys, 13864975 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378312908, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 147, "file_size": 13864975, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13793906, "index_size": 39276, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30725, "raw_key_size": 332842, "raw_average_key_size": 27, "raw_value_size": 13581229, "raw_average_value_size": 1107, "num_data_blocks": 1477, "num_entries": 12267, "num_filter_entries": 12267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 147, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.313231) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 13864975 bytes
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314244) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.2 rd, 108.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 11.6 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(15.0) write-amplify(7.5) OK, records in: 12798, records dropped: 531 output_compression: NoCompression
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.314260) EVENT_LOG_v1 {"time_micros": 1769093378314252, "job": 92, "event": "compaction_finished", "compaction_time_micros": 128411, "compaction_time_cpu_micros": 69306, "output_level": 6, "num_output_files": 1, "total_output_size": 13864975, "num_input_records": 12798, "num_output_records": 12267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378314707, "job": 92, "event": "table_file_deletion", "file_number": 146}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000144.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378317019, "job": 92, "event": "table_file_deletion", "file_number": 144}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.184378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317139) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.317144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #148. Immutable memtables: 0.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.318407) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 148
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378318466, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 256, "num_deletes": 250, "total_data_size": 23018, "memory_usage": 28768, "flush_reason": "Manual Compaction"}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #149: started
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378320267, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 149, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 73884, "largest_seqno": 74138, "table_properties": {"data_size": 12094, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 1903 microseconds, and 756 cpu microseconds.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.320301) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #149: 13847 bytes OK
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.320333) [db/memtable_list.cc:519] [default] Level-0 commit table #149 started
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321722) [db/memtable_list.cc:722] [default] Level-0 commit table #149: memtable #1 done
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321733) EVENT_LOG_v1 {"time_micros": 1769093378321729, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.321748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 21000, prev total WAL file size 21000, number of live WAL files 2.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000145.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.322121) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303037' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [149(13KB)], [147(13MB)]
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378322150, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [149], "files_L6": [147], "score": -1, "input_data_size": 13878822, "oldest_snapshot_seqno": -1}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #150: 12018 keys, 10006662 bytes, temperature: kUnknown
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378375568, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 150, "file_size": 10006662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9942184, "index_size": 33325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30085, "raw_key_size": 327860, "raw_average_key_size": 27, "raw_value_size": 9738722, "raw_average_value_size": 810, "num_data_blocks": 1228, "num_entries": 12018, "num_filter_entries": 12018, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 150, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.375912) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10006662 bytes
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.377428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 258.9 rd, 186.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 13.2 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(1725.0) write-amplify(722.7) OK, records in: 12522, records dropped: 504 output_compression: NoCompression
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.377444) EVENT_LOG_v1 {"time_micros": 1769093378377436, "job": 94, "event": "compaction_finished", "compaction_time_micros": 53601, "compaction_time_cpu_micros": 25979, "output_level": 6, "num_output_files": 1, "total_output_size": 10006662, "num_input_records": 12522, "num_output_records": 12018, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378377881, "job": 94, "event": "table_file_deletion", "file_number": 149}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000147.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093378380696, "job": 94, "event": "table_file_deletion", "file_number": 147}
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.322046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380896) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380902) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:49:38.380909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:38 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:38.578+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:39.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:39.535+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:39 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:40.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:40.564+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:40 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:41.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:41.553+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:41 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:42.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:42.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:42 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592159 podman[262159]: 2026-01-22 14:49:43.123599407 +0000 UTC m=+0.174180274 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 09:49:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:43.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:43.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:43 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:44.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:44.620+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:44 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:45.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:45.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:45 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:46.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:46.629+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:46 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.226 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:49:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:49:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:49:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:47.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:47.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:47 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:48.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:48.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:48 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:49.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:49.707+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:49 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:49 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:50.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:50.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:50 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:51.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:51.687+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:51 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:52.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:52.688+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:52 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:53.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:53.658+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:53 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:53 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:54.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:54.667+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:54 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:55.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:55.646+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:55 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:49:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:56.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:49:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:56.646+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:56 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:57.654+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:58 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:49:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:49:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:49:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:58.663+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:59 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:49:59 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:49:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:49:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:49:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:49:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:49:59.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:49:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:49:59.699+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:49:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:00 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 54 slow ops, oldest one blocked for 4388 sec, osd.2 has slow ops
Jan 22 09:50:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:00.683+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:01 np0005592159 podman[262245]: 2026-01-22 14:50:01.025891015 +0000 UTC m=+0.072323198 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 09:50:01 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:01.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:01.652+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:02 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:02.684+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:03 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:50:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000050s ======
Jan 22 09:50:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:03.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Jan 22 09:50:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:03.723+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:04 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 4393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:04 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:04.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:04 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:04.670 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:50:04 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:04.672 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:50:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:04.727+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:05 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:05.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:05.722+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:06 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:06.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:06.714+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:07 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:07.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:07.709+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:08.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:08 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:08 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:08.754+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:09 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:09.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:09.674 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:50:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:09.729+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:10.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:10 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:10.713+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:11 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:50:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:50:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:11.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:11.716+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:12.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:12 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:12.725+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:13 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:13 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:13.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:13.758+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:14 np0005592159 podman[262451]: 2026-01-22 14:50:14.047252371 +0000 UTC m=+0.106508696 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:50:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:14.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:14 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:14.712+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:15 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:15.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:15.686+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:16.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:16 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:16.666+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:17 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:17.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:17.641+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:18.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:18 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:50:18 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:18.601+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:19 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:19.571+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:20.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:20.580+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:20 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:21.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:21.623+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:21 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:22.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:22.640+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:22 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:23.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:23.595+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:23 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:24.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:24.636+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:24 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:25.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:25.599+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:25 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:26.577+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:26 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:27.537+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:27.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:27 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:28.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:28 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:28 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:29.526+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:50:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:29.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:50:29 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:30.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:30.527+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:30 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:31 np0005592159 podman[262563]: 2026-01-22 14:50:31.382214402 +0000 UTC m=+0.059380619 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 22 09:50:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:31.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:31.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:31 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:32.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:32.529+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:33 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:33.548+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:33.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:34 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:34 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:34.527+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:35 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:35.521+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:35.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:36 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:36.477+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:37 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:37.454+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:37.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:38 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:38.430+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:39 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:39 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:39.417+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:39 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000051s ======
Jan 22 09:50:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:40.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 22 09:50:40 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:40.393+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:40 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:41 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:41.409+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:41 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:41.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:42.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:42 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:42.377+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:42 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:43 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:43 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:43.346+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:43 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:43.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:44.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:44 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:44.394+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:44 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:44 np0005592159 podman[262617]: 2026-01-22 14:50:44.535481087 +0000 UTC m=+0.106437054 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #151. Immutable memtables: 0.
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.330165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 151
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445330193, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 1157, "num_deletes": 251, "total_data_size": 1916708, "memory_usage": 1952800, "flush_reason": "Manual Compaction"}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #152: started
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445339657, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 152, "file_size": 1258029, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74143, "largest_seqno": 75295, "table_properties": {"data_size": 1253303, "index_size": 2121, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12396, "raw_average_key_size": 20, "raw_value_size": 1242935, "raw_average_value_size": 2068, "num_data_blocks": 92, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093378, "oldest_key_time": 1769093378, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 9533 microseconds, and 3729 cpu microseconds.
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.339697) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #152: 1258029 bytes OK
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.339716) [db/memtable_list.cc:519] [default] Level-0 commit table #152 started
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341293) [db/memtable_list.cc:722] [default] Level-0 commit table #152: memtable #1 done
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341303) EVENT_LOG_v1 {"time_micros": 1769093445341300, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341332) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 1910974, prev total WAL file size 1910974, number of live WAL files 2.
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000148.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341945) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [152(1228KB)], [150(9772KB)]
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445341973, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [152], "files_L6": [150], "score": -1, "input_data_size": 11264691, "oldest_snapshot_seqno": -1}
Jan 22 09:50:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:45.364+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:45 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #153: 12104 keys, 9648597 bytes, temperature: kUnknown
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445394475, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 153, "file_size": 9648597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9584029, "index_size": 33223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30277, "raw_key_size": 330794, "raw_average_key_size": 27, "raw_value_size": 9379455, "raw_average_value_size": 774, "num_data_blocks": 1219, "num_entries": 12104, "num_filter_entries": 12104, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093445, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 153, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.394661) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 9648597 bytes
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.395732) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 214.3 rd, 183.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 9.5 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(16.6) write-amplify(7.7) OK, records in: 12619, records dropped: 515 output_compression: NoCompression
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.395746) EVENT_LOG_v1 {"time_micros": 1769093445395739, "job": 96, "event": "compaction_finished", "compaction_time_micros": 52559, "compaction_time_cpu_micros": 24685, "output_level": 6, "num_output_files": 1, "total_output_size": 9648597, "num_input_records": 12619, "num_output_records": 12104, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445395999, "job": 96, "event": "table_file_deletion", "file_number": 152}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000150.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093445397531, "job": 96, "event": "table_file_deletion", "file_number": 150}
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.341901) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397601) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:50:45.397605) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:50:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:45.449 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:50:45 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:45.449 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:50:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:45.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:46 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:46.359+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:46 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.227 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:50:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:50:47 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:47.406+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:47 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:47.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:48 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:48 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:48.449+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:48 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:49 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:49.474+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:49 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:49.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:50.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:50 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:50.486+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:50 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:51.536+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:51 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:51.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:51 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:52.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:52.516+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:52 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:52 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:53.509+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:53 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:53.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:53 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:53 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:54.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:54.492+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:54 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:54 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:55.448+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:55 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:50:55.451 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:50:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:55.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:55 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:56.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:56.489+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:56 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:56 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:57.443+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:57 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:50:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:57.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:50:57 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:50:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:50:58.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:58.484+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:58 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:58 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:58 np0005592159 ceph-mon[77081]: Health check update: 62 slow ops, oldest one blocked for 4448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:50:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:50:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:50:59.531+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:59 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:50:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:50:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:50:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:50:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:50:59.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:50:59 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:00.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:00.549+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:00 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:00 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:01.510+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:01 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:01.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:01 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:02 np0005592159 podman[262704]: 2026-01-22 14:51:02.027875536 +0000 UTC m=+0.088988671 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:51:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:02.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:02.552+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:02 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:02 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:03.524+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:03 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:03 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:03 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:04.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:04.569+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:04 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:04 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:05.545+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:05 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:05 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:05 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:06.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:06.503+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:06 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:06 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:07.454+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:07 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:07.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:07 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 09:51:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:08.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:08 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:08.422+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:08 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:08 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:09 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:09.397+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:09.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:09 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:10.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:10 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:10.394+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:10 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:11 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:11.345+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:11.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:12 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:12.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:12 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:12.318+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:13 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:13 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:13.311+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:13.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:14 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:14 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:14 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:14.262+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:14.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:15 np0005592159 podman[262782]: 2026-01-22 14:51:15.102987478 +0000 UTC m=+0.147197879 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:51:15 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:15 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:15.276+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:15.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:16 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:16 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:16.281+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:16.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:17 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:17 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:17.298+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:17.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:18 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:18.255+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:51:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2269611559' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:51:19 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:19 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:19.278+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:19.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:20.244+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:20 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:20 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:51:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:51:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:21.238+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:21 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:21 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:21.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:22.275+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:22 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:22 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:22.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:23 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:23 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:23.308+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:23 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:23.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:51:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:51:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:24.315+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:24 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:24 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:25.275+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:25 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:25 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:25.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:26.296+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:26 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 32 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:26 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:27.279 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:51:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:27.280 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:51:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:27.301+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:27 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:27.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:27 np0005592159 ceph-mon[77081]: 32 slow requests (by type [ 'delayed' : 32 ] most affected pool [ 'vms' : 25 ])
Jan 22 09:51:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:27 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:51:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:28.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:28.298+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:28 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:28 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:28 np0005592159 ceph-mon[77081]: Health check update: 32 slow ops, oldest one blocked for 4477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:29.285+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:29 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:51:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:29.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:51:29 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592159 ovn_controller[133156]: 2026-01-22T14:51:30Z|00078|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 22 09:51:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:30.268+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:30 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:30 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:30 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:31.294+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:31 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:51:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:31.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:51:31 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 49 ])
Jan 22 09:51:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:32.279+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:32 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 51 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:51:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:32 np0005592159 ceph-mon[77081]: 51 slow requests (by type [ 'delayed' : 51 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:51:33 np0005592159 podman[263048]: 2026-01-22 14:51:33.001330877 +0000 UTC m=+0.055580893 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 09:51:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:33.240+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:33 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/26803393' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:51:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:33.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:33 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:34.217+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:34 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:34 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:34.281 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:51:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:34 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:51:35 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:51:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:51:35 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2934091256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:51:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:35.250+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:35 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:35.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:35 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:36.240+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:36 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:51:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:51:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:51:36 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3194695356' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:51:36 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:37.252+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:37 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:37.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:38 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:38.202+0000 7f47f8ed4640 -1 osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:38 np0005592159 ceph-osd[79779]: osd.2 161 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 e162: 3 total, 3 up, 3 in
Jan 22 09:51:39 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:39 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:39.177+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:39.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:40 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:40.135+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:41.091+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:41 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:42.053+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:42 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:42.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:43.004+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:43 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:51:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:51:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:44.020+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:44 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:44 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4493 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:45.007+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:45 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:45.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:46 np0005592159 podman[263072]: 2026-01-22 14:51:46.024167431 +0000 UTC m=+0.084627646 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 09:51:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:46.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:51:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:46.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:51:46 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:47.069+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:51:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:51:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:51:47 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:48.053+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:48.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:48 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:48 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:49.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:49 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:49.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:50.099+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:50.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:50 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:51.147+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:51.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:52 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:52.158+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:52.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:53 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:53.110+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:53.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:54.064+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:54 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:54 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:51:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:54.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:51:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:55.027+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:55 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:51:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:56.055+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:56.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:56 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:57.026+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:57 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:58.075+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:51:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:51:58.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:51:59 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:51:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:51:59.111+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:51:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:51:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:51:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:51:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:51:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:51:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:00.095+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:00.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:00 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:01.071+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:01.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:02.023+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 11 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:02 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:02.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:03.059+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:03 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592159 ceph-mon[77081]: 11 slow requests (by type [ 'delayed' : 11 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:03.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:03 np0005592159 podman[263158]: 2026-01-22 14:52:03.992349556 +0000 UTC m=+0.048792294 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 09:52:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:04.106+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:04.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:04 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:04 np0005592159 ceph-mon[77081]: Health check update: 11 slow ops, oldest one blocked for 4513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:05.082+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:05 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:05.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:06.045+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:06 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:06.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:06.996+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:07.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:07.972+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:08.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:08 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:08.990+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 31 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:09 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:09.984+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:10.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:10 np0005592159 ceph-mon[77081]: 31 slow requests (by type [ 'delayed' : 31 ] most affected pool [ 'vms' : 21 ])
Jan 22 09:52:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:10.958+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:11 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:11.062 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:52:11 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:11.063 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:52:11 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:12.001+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:12.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:12 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:12 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:13.016+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:13 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:13.065 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:52:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:14.047+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:14.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:14 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:15.017+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:15 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.006000190s ======
Jan 22 09:52:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:15.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.006000190s
Jan 22 09:52:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:16.003+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:16.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:16 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:17.020+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:17 np0005592159 podman[263235]: 2026-01-22 14:52:17.055356838 +0000 UTC m=+0.110088550 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:52:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:17.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:18.000+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:18.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:52:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/531944098' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:52:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:18.964+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:19.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:19.968+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:20.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:20.992+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:21 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:21.956+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:22.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:22.984+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:23 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:23.965+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:24.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:24 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:24 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:25.006+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:25.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:26.056+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:26 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:26.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:27.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:27 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 39 ])
Jan 22 09:52:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:27.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:28.035+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:28.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:52:28 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 4538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:29.012+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:29 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:30.058+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:30.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:30 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:31.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:31.978+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:32.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:33.019+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:33 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:34.060+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000032s ======
Jan 22 09:52:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:34.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000032s
Jan 22 09:52:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:34 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:34 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:35 np0005592159 podman[263452]: 2026-01-22 14:52:35.00381899 +0000 UTC m=+0.067467684 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 09:52:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:35.065+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:35 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:36.055+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:36.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:37 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:37.066+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:37.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:38.051+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:52:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:38.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #154. Immutable memtables: 0.
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.463348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 154
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558463369, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1704, "num_deletes": 250, "total_data_size": 3211790, "memory_usage": 3277864, "flush_reason": "Manual Compaction"}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #155: started
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558483894, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 155, "file_size": 2099552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75300, "largest_seqno": 76999, "table_properties": {"data_size": 2092912, "index_size": 3521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16257, "raw_average_key_size": 19, "raw_value_size": 2078219, "raw_average_value_size": 2546, "num_data_blocks": 154, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093446, "oldest_key_time": 1769093446, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 20609 microseconds, and 4515 cpu microseconds.
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483955) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #155: 2099552 bytes OK
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.483970) [db/memtable_list.cc:519] [default] Level-0 commit table #155 started
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486047) [db/memtable_list.cc:722] [default] Level-0 commit table #155: memtable #1 done
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486074) EVENT_LOG_v1 {"time_micros": 1769093558486066, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.486097) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3203764, prev total WAL file size 3203764, number of live WAL files 2.
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000151.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.487972) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323530' seq:72057594037927935, type:22 .. '6B7600353031' seq:0, type:0; will stop at (end)
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [155(2050KB)], [153(9422KB)]
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558488050, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [155], "files_L6": [153], "score": -1, "input_data_size": 11748149, "oldest_snapshot_seqno": -1}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #156: 12403 keys, 10657121 bytes, temperature: kUnknown
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558562695, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 156, "file_size": 10657121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10590143, "index_size": 34865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31045, "raw_key_size": 339520, "raw_average_key_size": 27, "raw_value_size": 10379451, "raw_average_value_size": 836, "num_data_blocks": 1270, "num_entries": 12403, "num_filter_entries": 12403, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093558, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 156, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.562924) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 10657121 bytes
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.564304) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.3 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.2 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(10.7) write-amplify(5.1) OK, records in: 12920, records dropped: 517 output_compression: NoCompression
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.564333) EVENT_LOG_v1 {"time_micros": 1769093558564326, "job": 98, "event": "compaction_finished", "compaction_time_micros": 74701, "compaction_time_cpu_micros": 29292, "output_level": 6, "num_output_files": 1, "total_output_size": 10657121, "num_input_records": 12920, "num_output_records": 12403, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558564746, "job": 98, "event": "table_file_deletion", "file_number": 155}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000153.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093558566299, "job": 98, "event": "table_file_deletion", "file_number": 153}
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.487742) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566355) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:52:38.566359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:52:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:39.091+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:39 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:39 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:39.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:40.076+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000031s ======
Jan 22 09:52:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:40.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 22 09:52:41 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:41.060+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:41.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:42.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:42.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:43.080+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:43 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:43.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:44.104+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:44 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:44 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:44.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:45 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:45.135+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:46 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:46.168+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:46.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:47 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:47.175+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.228 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:52:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:52:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:48 np0005592159 podman[263528]: 2026-01-22 14:52:48.047648 +0000 UTC m=+0.108141070 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 09:52:48 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:48.217+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:48.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:49 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:49 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:49.177+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:52:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:49.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:52:50 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:50.219+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:50.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:51 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:51.219+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:51.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:52 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:52.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:52.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:53 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:53.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:53.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:53.974 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:52:53 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:53.974 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:52:54 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:54 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:54.337+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:54.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:55 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:55.336+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:55.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:56 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:56.324+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:52:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:56.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:52:57 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:52:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:57.358+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:57.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:58 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:52:58.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:58.405+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:59 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:59 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:52:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:52:59.452+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:52:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:52:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:52:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:52:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:52:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:52:59.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:52:59 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:52:59.976 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:53:00 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:00.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:00.434+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:01 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:01.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:01.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:02.384+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:02 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:02.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:03 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:03.421+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:03.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:04.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:04.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:04 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:04 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:05 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:05.419+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:05 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:05.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:05 np0005592159 podman[263613]: 2026-01-22 14:53:05.9948085 +0000 UTC m=+0.056341326 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 09:53:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:06.374+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:06.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:06 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:07.341+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:07 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:07.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:08.324+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:08.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:08 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:08 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:09.275+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:09.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:09 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:09 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:10.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:10.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:11 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:11.248+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:11 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:11.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:12 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:12.292+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:12.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:13 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:13.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:14 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:14 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:14.280+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:14.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:15.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:15 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:15.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:16.206+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:16.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:16 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:17.249+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:17.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:18 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:18.260+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:53:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:53:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:53:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3578732624' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:53:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:18.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:19 np0005592159 podman[263690]: 2026-01-22 14:53:19.035095995 +0000 UTC m=+0.092391712 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 09:53:19 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:19.303+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:19.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:20 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:20.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:20.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:21.232+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:21 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:21.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:22 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:22.273+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:22.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:22 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:23.292+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:23 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:23 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:23.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:24 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:24.291+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:24.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:24 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:25.302+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:25.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:26.302+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:26.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:26 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:27.269+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:27 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:27.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:28.303+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:28 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:28 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:29.305+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:29.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:29 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:30.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:30.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:30 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:30 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:31.282+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:31 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:32 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:32.267+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:32.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:33 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:33.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:53:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:33.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:53:34 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:34 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:34.202+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:34.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:35 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:35.216+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:35.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:36.179+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:36 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:53:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:36.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:53:36 np0005592159 podman[263777]: 2026-01-22 14:53:36.992392654 +0000 UTC m=+0.049534275 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 09:53:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:37.142+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:37 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:37.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:38.129+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #157. Immutable memtables: 0.
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.314801) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 157
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618314830, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1004, "num_deletes": 251, "total_data_size": 1621331, "memory_usage": 1651256, "flush_reason": "Manual Compaction"}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #158: started
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618323512, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 158, "file_size": 1064004, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77004, "largest_seqno": 78003, "table_properties": {"data_size": 1059795, "index_size": 1732, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10912, "raw_average_key_size": 20, "raw_value_size": 1050737, "raw_average_value_size": 1953, "num_data_blocks": 75, "num_entries": 538, "num_filter_entries": 538, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093559, "oldest_key_time": 1769093559, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 8768 microseconds, and 4033 cpu microseconds.
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323565) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #158: 1064004 bytes OK
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.323583) [db/memtable_list.cc:519] [default] Level-0 commit table #158 started
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326528) [db/memtable_list.cc:722] [default] Level-0 commit table #158: memtable #1 done
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326577) EVENT_LOG_v1 {"time_micros": 1769093618326567, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.326602) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1616248, prev total WAL file size 1616248, number of live WAL files 2.
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000154.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327399) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [158(1039KB)], [156(10MB)]
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618327439, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [158], "files_L6": [156], "score": -1, "input_data_size": 11721125, "oldest_snapshot_seqno": -1}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #159: 12430 keys, 10129553 bytes, temperature: kUnknown
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618386578, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 159, "file_size": 10129553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10062943, "index_size": 34433, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31109, "raw_key_size": 341154, "raw_average_key_size": 27, "raw_value_size": 9852158, "raw_average_value_size": 792, "num_data_blocks": 1246, "num_entries": 12430, "num_filter_entries": 12430, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093618, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 159, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.386913) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129553 bytes
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.389232) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 197.6 rd, 170.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(20.5) write-amplify(9.5) OK, records in: 12941, records dropped: 511 output_compression: NoCompression
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.389267) EVENT_LOG_v1 {"time_micros": 1769093618389241, "job": 100, "event": "compaction_finished", "compaction_time_micros": 59311, "compaction_time_cpu_micros": 29278, "output_level": 6, "num_output_files": 1, "total_output_size": 10129553, "num_input_records": 12941, "num_output_records": 12430, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618389909, "job": 100, "event": "table_file_deletion", "file_number": 158}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000156.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093618392044, "job": 100, "event": "table_file_deletion", "file_number": 156}
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.327297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392214) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:53:38.392219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:53:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:38.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:39.124+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:53:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:39.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:40.160+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:40 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:40.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:41.185+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:41 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:41.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:42.186+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:42.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:42 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:43.221+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:53:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:43.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:53:44 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:44.194+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:53:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:44.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:53:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:45 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:45.175+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:46 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:46.212+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:53:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:46.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:53:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:47.228+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.229 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.230 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:53:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:47.230 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:53:47 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:53:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:47.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:48.257+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:53:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:48.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:53:48 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:48 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:49.262+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:49 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:53:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:49.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:53:50 np0005592159 podman[263983]: 2026-01-22 14:53:50.020490437 +0000 UTC m=+0.078157606 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 09:53:50 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:50.104 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:53:50 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:50.105 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:53:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:50.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:53:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:50.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:53:50 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:51.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:51.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:52 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:52.344+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:52.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:53.341+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:53 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:53.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:54.345+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:54.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:54 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:54 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:55.357+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:55 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:53:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:55.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:53:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:56.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:53:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:56.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:53:56 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:57.337+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:57 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:53:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:53:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:58.304+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:53:58.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:53:59 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:53:59.107 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:53:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:53:59.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:53:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:59 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:53:59 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:53:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:53:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:53:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:53:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:53:59.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:00.236+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:00 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:00.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:01.258+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:01 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:01.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:02.233+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:02.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:03 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:03.225+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:03.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:04.210+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:04 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:04 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4632 sec, osd.2 has slow ops (SLOW_OPS)
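The SLOW_OPS health check repeats about every five seconds and the "blocked for N sec" counter climbs in step with the wall clock (4632 s here, 4638 s at 09:54:09, 4643 s at 09:54:13, and so on), so the oldest op, an omap-get-vals read of rbd_mirror_snapshot_schedule from client.14140, has been stuck for roughly 77 minutes. A minimal sketch for following that counter (Python; assumes this journal has been saved as a plain text file, called "messages" here for illustration):

import re

# Illustrative: follow the SLOW_OPS counter in "Health check update" lines.
HEALTH_RE = re.compile(r'Health check update: (\d+) slow ops, '
                       r'oldest one blocked for (\d+) sec')

with open('messages') as fh:          # assumed plain-text export of this journal
    for line in fh:
        m = HEALTH_RE.search(line)
        if m:
            ops, blocked = map(int, m.groups())
            print(f'{ops} slow ops, oldest blocked {blocked} s (~{blocked / 60:.1f} min)')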
Jan 22 09:54:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:05 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:05.179+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:05 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:05.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:06.215+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:06 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:07.178+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:07 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:07.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:08 np0005592159 podman[264069]: 2026-01-22 14:54:08.025248231 +0000 UTC m=+0.076429806 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
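The podman health_status event above embeds the whole container definition in config_data (image, healthcheck mount and test script, volumes) alongside health_status=healthy and health_failing_streak=0, so the OVN metadata agent keeps passing its health check while the Ceph slow ops persist (a matching event for ovn_controller follows at 09:54:21). A minimal sketch for summarising these events (Python; scanning the same saved journal text assumed above):

import re

# Illustrative: collect the most recent health_status per container name.
EVENT_RE = re.compile(r'container health_status \S+ \(image=[^,]+, '
                      r'name=(?P<name>[^,]+), health_status=(?P<status>[^,]+)')

latest = {}
with open('messages') as fh:          # assumed plain-text export of this journal
    for line in fh:
        m = EVENT_RE.search(line)
        if m:
            latest[m['name']] = m['status']
print(latest)  # e.g. {'ovn_metadata_agent': 'healthy', 'ovn_controller': 'healthy'}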
Jan 22 09:54:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:08.159+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:08.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:09.165+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:09 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:09 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
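The recurring _set_new_cache_sizes lines show the peon mon re-splitting its memory target; the reported allocations are exact mebibyte values (348127232 B = 332 MiB for inc_alloc and full_alloc, 318767104 B = 304 MiB for kv_alloc) against a cache_size of about 973 MiB. A quick conversion check (Python, values copied from the line above):

# Byte values taken from the _set_new_cache_sizes line above.
MiB = 1024 * 1024
for name, val in [('cache_size', 1020054731), ('inc_alloc', 348127232),
                  ('full_alloc', 348127232), ('kv_alloc', 318767104)]:
    print(f'{name}: {val / MiB:.1f} MiB')
# cache_size ~972.8 MiB; inc_alloc/full_alloc 332.0 MiB; kv_alloc 304.0 MiB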
Jan 22 09:54:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:09.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:10.151+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:10 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:11 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:11.159+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:11 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:11.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:12.162+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:12.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:12 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:13.206+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #160. Immutable memtables: 0.
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.561392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 160
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653561466, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 696, "num_deletes": 256, "total_data_size": 999630, "memory_usage": 1013048, "flush_reason": "Manual Compaction"}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #161: started
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653570694, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 161, "file_size": 656855, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78008, "largest_seqno": 78699, "table_properties": {"data_size": 653574, "index_size": 1124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8446, "raw_average_key_size": 19, "raw_value_size": 646531, "raw_average_value_size": 1493, "num_data_blocks": 48, "num_entries": 433, "num_filter_entries": 433, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093619, "oldest_key_time": 1769093619, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 9355 microseconds, and 4706 cpu microseconds.
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570749) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #161: 656855 bytes OK
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.570775) [db/memtable_list.cc:519] [default] Level-0 commit table #161 started
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572371) [db/memtable_list.cc:722] [default] Level-0 commit table #161: memtable #1 done
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572390) EVENT_LOG_v1 {"time_micros": 1769093653572384, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 995762, prev total WAL file size 995762, number of live WAL files 2.
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000157.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572998) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353139' seq:72057594037927935, type:22 .. '6C6F676D0033373732' seq:0, type:0; will stop at (end)
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [161(641KB)], [159(9892KB)]
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653573032, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [161], "files_L6": [159], "score": -1, "input_data_size": 10786408, "oldest_snapshot_seqno": -1}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #162: 12339 keys, 10642966 bytes, temperature: kUnknown
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653639752, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 162, "file_size": 10642966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10576177, "index_size": 34868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 30853, "raw_key_size": 340455, "raw_average_key_size": 27, "raw_value_size": 10366147, "raw_average_value_size": 840, "num_data_blocks": 1261, "num_entries": 12339, "num_filter_entries": 12339, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 162, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.640205) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 10642966 bytes
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.641674) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.0 rd, 158.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.7 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(32.6) write-amplify(16.2) OK, records in: 12863, records dropped: 524 output_compression: NoCompression
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.641695) EVENT_LOG_v1 {"time_micros": 1769093653641687, "job": 102, "event": "compaction_finished", "compaction_time_micros": 66998, "compaction_time_cpu_micros": 40459, "output_level": 6, "num_output_files": 1, "total_output_size": 10642966, "num_input_records": 12863, "num_output_records": 12339, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653642389, "job": 102, "event": "table_file_deletion", "file_number": 161}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000159.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093653644750, "job": 102, "event": "table_file_deletion", "file_number": 159}
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.572947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:54:13.645035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
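The rocksdb burst above is one manual flush-and-compact cycle on the mon store: JOB 101 flushes a ~1 MB memtable to L0 table #161 (656855 bytes), then JOB 102 merges that file with the existing 9892 KB L6 table #159 into a new 10642966-byte L6 table #162 and deletes both inputs. The amplification and throughput figures in the JOB 102 summary follow directly from those byte counts; a quick arithmetic check (Python, values copied from the event lines above):

# Values from the JOB 102 compaction events above.
l0_in    = 656_855       # bytes, L0 input table #161
total_in = 10_786_408    # bytes, "input_data_size" (L0 #161 + L6 #159)
out      = 10_642_966    # bytes, new L6 table #162
t_us     = 66_998        # "compaction_time_micros"

print(round(out / l0_in, 1))                # 16.2  -> write-amplify(16.2)
print(round((total_in + out) / l0_in, 1))   # 32.6  -> read-write-amplify(32.6)
print(round(total_in / t_us, 1))            # 161.0 -> "MB/sec: 161.0 rd" (bytes/us ~ MB/s)
print(round(out / t_us, 1))                 # 158.9 -> "MB/sec: 158.9 wr"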
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:13 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:13.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:14.255+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:14 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:15.252+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:15.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:15 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:16.286+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:16.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:17.293+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:17 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:17.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:18.252+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:18 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:18.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:19.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:19 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:19 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4647 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:19.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:20.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:21 np0005592159 podman[264146]: 2026-01-22 14:54:21.07586293 +0000 UTC m=+0.129382930 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 09:54:21 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:21.290+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:21.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:22 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:22.333+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:22 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:22.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:23.317+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:23 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:23.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:24.348+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:24 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:24.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:24 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:24 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4652 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:25.377+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:25 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 09:54:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:25.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:26.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:26.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:27 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:27.391+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:27.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:28 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:28.423+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:29 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:29 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 4657 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:29.382+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:54:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:29.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:54:30 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:30.384+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:31 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:31.425+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:31.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:32 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:32.446+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:32 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:33.418+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:33 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:33.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:34.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:34.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:34 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:34 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:35.426+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:35 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:35.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:36.432+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:36.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:36 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:37.477+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:37 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:37.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:38.441+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:38.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:38 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:38 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:39 np0005592159 podman[264231]: 2026-01-22 14:54:38.999199637 +0000 UTC m=+0.054045116 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:54:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:39.418+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:39 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:39.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:40.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:40 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:41.415+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:41.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:42 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 28 ])
Jan 22 09:54:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:42.392+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:43 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:43.428+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:43.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:44 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:44 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:44.408+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:44.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:45.389+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:45 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:45.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:54:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:46.416+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:46 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:46.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.231 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.232 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:54:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:47.232 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:54:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:47.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:47 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:47.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:48.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:48.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:48 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:48 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:49.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:49 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000057s ======
Jan 22 09:54:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:49.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000057s
Jan 22 09:54:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:50.448+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:50.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:50 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:54:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:54:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:51.473+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:51.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:52 np0005592159 podman[264389]: 2026-01-22 14:54:52.068434737 +0000 UTC m=+0.131493420 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 22 09:54:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:52.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:52.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:52 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:53.481+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:53.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:54 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:54.477+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:54.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:55 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:55.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:55.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:56 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:56.518+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:56.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:57.186 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:54:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:57.188 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:54:57 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:57.475+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:57.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:58.429+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:54:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:54:58.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:54:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:54:58 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:59 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:54:59.191 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:54:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:54:59.474+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:54:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:54:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:59 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:54:59 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:54:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:54:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:54:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:54:59.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:00.462+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:00.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:01 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:01.421+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:01.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:02 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:02.455+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:02.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:03.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:03 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:03.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:04.505+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:04.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:04 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:04 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:05.555+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:05 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:05 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:05.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:06.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:06.584+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:06 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:07.541+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:07 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:08.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:08.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:08 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:08 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:09.470+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:09 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:10.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:10 np0005592159 podman[264525]: 2026-01-22 14:55:10.007735008 +0000 UTC m=+0.062761985 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 09:55:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:10.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 13 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:10.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:10 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:11.509+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:11 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:11 np0005592159 ceph-mon[77081]: 13 slow requests (by type [ 'delayed' : 13 ] most affected pool [ 'vms' : 8 ])
Jan 22 09:55:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:12.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:12.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:12.555+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:12 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:13.509+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:13 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:13 np0005592159 ceph-mon[77081]: Health check update: 13 slow ops, oldest one blocked for 4702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:14.473+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:14.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:14 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:15.503+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:15 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:16.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:16.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:16.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:16 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:17.471+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:17 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:18.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:18.490+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:18.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:18 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:18 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4707 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:19.511+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:19 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:20.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:20.553+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:20.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:20 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:21.563+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:21 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:22.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:22.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:22 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:22.585+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:22 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592159 podman[264602]: 2026-01-22 14:55:23.025379106 +0000 UTC m=+0.090160039 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:55:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:23.571+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:23 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:23 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:24.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:24.551+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:24 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:24.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:24 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:25.564+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:25 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:26.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:26.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:26.566+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:26 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:27.557+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:28.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:28 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:28.521+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:28.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:55:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 14K writes, 79K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1889 writes, 9643 keys, 1889 commit groups, 1.0 writes per commit group, ingest: 16.44 MB, 0.03 MB/s#012Interval WAL: 1889 writes, 1889 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     84.5      1.02              0.31        51    0.020       0      0       0.0       0.0#012  L6      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   5.5    139.6    120.3      3.91              1.54        50    0.078    453K    26K       0.0       0.0#012 Sum      1/0   10.15 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.5    110.8    112.9      4.93              1.86       101    0.049    453K    26K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7    129.7    129.6      0.64              0.31        14    0.046     89K   3619       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.5      0.0       0.0   0.0    139.6    120.3      3.91              1.54        50    0.078    453K    26K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     84.7      1.01              0.31        50    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.084, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.54 GB write, 0.12 MB/s write, 0.53 GB read, 0.11 MB/s read, 4.9 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 57.94 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000398 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3060,55.08 MB,18.1191%) FilterBlock(101,1.23 MB,0.403088%) IndexBlock(101,1.63 MB,0.536402%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 09:55:29 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:29 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:29.516+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:30.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:30 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:30.515+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:31 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:31.485+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:32 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:32.530+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:32 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:32.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:33 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:33.533+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000029s ======
Jan 22 09:55:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:34.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000029s
Jan 22 09:55:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:34.533+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:34.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:34 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:34 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:35.514+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:35 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:36.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:36.489+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:36.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:37 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:37.457+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:38.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:38 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:38.447+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 09:55:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 09:55:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:39.400+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:39 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:39 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:40.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:40.395+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:40.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:40 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:40 np0005592159 podman[264689]: 2026-01-22 14:55:40.986986854 +0000 UTC m=+0.052132769 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:55:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:41.366+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:41 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:41 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:41.937 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:55:41 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:41.937 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:55:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:42.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:42.386+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:42.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:42 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:43.436+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:44 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:44.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:44.440+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:44.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:44 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:44.940 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:55:45 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:45.425+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:46.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:46 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:46.433+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:47 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.233 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:55:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.233 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:55:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:55:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:55:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:47.439+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:48.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:48 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:48.402+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:48.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:49 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:49 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4737 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:49.444+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:50.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:50 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:50.399+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:50.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:51 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:51.427+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:52.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:52.392+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:52 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:53.354+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:53 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:55:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:55:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:55:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2964490626' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:55:54 np0005592159 podman[264714]: 2026-01-22 14:55:54.063713236 +0000 UTC m=+0.111418115 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 09:55:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:55:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:54.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:55:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:54.377+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:54 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:54 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4742 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:54.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:55.332+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:55 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:56.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:56.304+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:56.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:57.338+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:57 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:55:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:55:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:55:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:58.346+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:58 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:55:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:55:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:55:58.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:55:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:55:59.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:55:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:55:59 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:55:59 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4747 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:55:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:00.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:00.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:00.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:01 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:56:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:01.320+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:02.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:02 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:56:02 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:02.343+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:03.342+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:03 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:04.309+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:04.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:04 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:04 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:05 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:05.351+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:05 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:06.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:06.316+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:06 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:07.281+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:07 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:08.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:08.305+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:08.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:08 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:08 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:09.298+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:10 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:10.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:10.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:10.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:11 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:11.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:11 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:56:12 np0005592159 podman[264980]: 2026-01-22 14:56:12.011152579 +0000 UTC m=+0.064374787 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:56:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:12.329+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:12 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:13.338+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:13 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:14.298+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:14 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:14 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:15.326+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:15 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:16.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:16.281+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:17.235+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:17 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:18.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:18.279+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:18 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:19.309+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:20.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:20.275+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:20.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:21.315+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:22.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:22.349+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:22 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:23.319+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:23 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:23 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:23 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:24.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:24.140 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:56:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:24.141 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:56:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:24.313+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:24 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:24 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:24 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:25 np0005592159 podman[265057]: 2026-01-22 14:56:25.060248537 +0000 UTC m=+0.114400694 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 09:56:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:25.272+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:26.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:26.259+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:27.231+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:27 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:28.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:28.203+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:28.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:28 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:28 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:29.184+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:29 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:30 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:30.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:30.185+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:30.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:31 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:31.188+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 09:56:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.5 total, 600.0 interval#012Cumulative writes: 11K writes, 38K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 11K writes, 3411 syncs, 3.26 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 875 writes, 1364 keys, 875 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s#012Interval WAL: 875 writes, 419 syncs, 2.09 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 09:56:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:32.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:32.181+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:32 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #163. Immutable memtables: 0.
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.397699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 163
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792397752, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 2030, "num_deletes": 251, "total_data_size": 3988875, "memory_usage": 4042784, "flush_reason": "Manual Compaction"}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #164: started
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792414756, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 164, "file_size": 2598388, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 78704, "largest_seqno": 80729, "table_properties": {"data_size": 2590610, "index_size": 4335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19959, "raw_average_key_size": 21, "raw_value_size": 2573537, "raw_average_value_size": 2749, "num_data_blocks": 186, "num_entries": 936, "num_filter_entries": 936, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093653, "oldest_key_time": 1769093653, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 17109 microseconds, and 5868 cpu microseconds.
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414812) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #164: 2598388 bytes OK
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.414836) [db/memtable_list.cc:519] [default] Level-0 commit table #164 started
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416367) [db/memtable_list.cc:722] [default] Level-0 commit table #164: memtable #1 done
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416387) EVENT_LOG_v1 {"time_micros": 1769093792416380, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.416407) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 3979490, prev total WAL file size 3979490, number of live WAL files 2.
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000160.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.418062) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [164(2537KB)], [162(10MB)]
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792418112, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [164], "files_L6": [162], "score": -1, "input_data_size": 13241354, "oldest_snapshot_seqno": -1}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #165: 12758 keys, 11618320 bytes, temperature: kUnknown
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792479450, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 165, "file_size": 11618320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11548234, "index_size": 37077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 31941, "raw_key_size": 350844, "raw_average_key_size": 27, "raw_value_size": 11330398, "raw_average_value_size": 888, "num_data_blocks": 1348, "num_entries": 12758, "num_filter_entries": 12758, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093792, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 165, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.479699) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 11618320 bytes
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.480895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.6 rd, 189.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 10.1 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(9.6) write-amplify(4.5) OK, records in: 13275, records dropped: 517 output_compression: NoCompression
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.480911) EVENT_LOG_v1 {"time_micros": 1769093792480903, "job": 104, "event": "compaction_finished", "compaction_time_micros": 61418, "compaction_time_cpu_micros": 27182, "output_level": 6, "num_output_files": 1, "total_output_size": 11618320, "num_input_records": 13275, "num_output_records": 12758, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792481539, "job": 104, "event": "table_file_deletion", "file_number": 164}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000162.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093792483507, "job": 104, "event": "table_file_deletion", "file_number": 162}
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.417963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:56:32.483587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:56:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:32.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:33.221+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:33 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:34.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:34 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:34.144 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:56:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:34.204+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:34.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:34 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:34 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4783 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:35.164+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:35 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:36.124+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:36.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:36.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:36 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:37.155+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:38.139+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:38.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:38 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:38.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:39.103+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:39 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 09:56:39 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:39 np0005592159 ceph-mon[77081]: Health check update: 83 slow ops, oldest one blocked for 4788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:40.068+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:40.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:40 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:40.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:41.048+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:41 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:42.021+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:42.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:42 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:42.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:43.037+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:43 np0005592159 podman[265144]: 2026-01-22 14:56:43.061538977 +0000 UTC m=+0.098164347 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 09:56:43 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:44.069+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:44.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:45.021+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:45 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:45 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:45.979+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:46.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:46 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:46.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:47.023+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:56:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.234 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:56:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:56:47.235 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:56:47 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:48.063+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:48.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:48.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:49 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:49.078+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:50.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:50.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:50 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 09:56:50 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 4798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:50 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:50.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:51.025+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:51 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:52.014+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:52.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:56:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:52.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:56:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:53.047+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:53 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:54.098+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:54.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:54 np0005592159 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:56:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:56:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:54.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:55.052+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:55 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:55 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:56.040+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592159 podman[265219]: 2026-01-22 14:56:56.061173012 +0000 UTC m=+0.116364126 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:56:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:56.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:56 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:56:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:56.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:56:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:57.001+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:57 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:57 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:58.042+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:58 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:56:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:56:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:56:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:56:58.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:56:58 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:56:59.015+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:59 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:56:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:56:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:00.056+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:00 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:00.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:00 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 22 09:57:00 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:00 np0005592159 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:00.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:00 np0005592159 radosgw[80769]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 22 09:57:01 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:01.025+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 47 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:01 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:01 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:02 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:02.024+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:02.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:02 np0005592159 ceph-mon[77081]: 47 slow requests (by type [ 'delayed' : 47 ] most affected pool [ 'vms' : 31 ])
Jan 22 09:57:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:02.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:03.019+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:03 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:03.990+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:03 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:04.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:04.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:04 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:04 np0005592159 ceph-mon[77081]: Health check update: 47 slow ops, oldest one blocked for 4813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:04.959+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:04 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:05 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:05 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:06.004+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:06.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:06.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:06.970+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:06 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:07 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:07.933+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:07 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:08.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:08 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:08.971+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:08 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:09 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:09.976+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:09 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:10.944+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:10 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:11 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:11.967+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:11 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:12 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:12 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:12.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:12.961+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:12 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:57:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:57:13 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:13.971+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:13 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:14 np0005592159 podman[265385]: 2026-01-22 14:57:14.021614846 +0000 UTC m=+0.073488116 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 09:57:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:14.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:14 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:14.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:14 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:14.950+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:15 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:15.944+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:16.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:16.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:16 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:16 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:16.923+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:17 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:17.896+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:17 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:18.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:57:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:57:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:57:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3121868371' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 09:57:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:18.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:18 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:18 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:18.920+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:19 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:19.871+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:19 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:19 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4828 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:57:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:20.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:20.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:20 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:20.844+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:20 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:21 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:21.834+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:21 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:22.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:22.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:22 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:22.815+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:23.793+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:24 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:24 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4833 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:24.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:24.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:24 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:24.761+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:25 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:25 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:25.787+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:26 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:26.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:26 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:26.363 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 09:57:26 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:26.364 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 09:57:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:26.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:26 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:26.759+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:27 np0005592159 podman[265511]: 2026-01-22 14:57:27.035892245 +0000 UTC m=+0.089868538 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Jan 22 09:57:27 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:27 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:27.766+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:28 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:28.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:28.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:28 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:28.729+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:29.365 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 09:57:29 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:29 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:29.695+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:29 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:30.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:30 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:30.669+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:30 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:30.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:31 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 09:57:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:31.718+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:31 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:32.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:32 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:32.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:32.721+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:32 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:33 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:33 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:33.688+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:34.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:34 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:34 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 4843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:34 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:34.648+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:34.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:35 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:35 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:35.666+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:36.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:36 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:36.653+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:36.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:36 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:37 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:37.666+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:37 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:38.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:38.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:38 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:38.712+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:38 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:39.758+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:39 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:39 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:39 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4848 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:40.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:40.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:40.799+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:40 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:40 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:41.776+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:41 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:41 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:42.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:42.809+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:42 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:42 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:43.810+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:43 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:43 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:44.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:44.835+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:44 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:44 np0005592159 podman[265598]: 2026-01-22 14:57:44.991601386 +0000 UTC m=+0.057175447 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 09:57:45 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:45 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4853 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:45.882+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:45 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:46.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:46.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:46.839+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:46 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:47 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.235 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 09:57:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.236 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 09:57:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:57:47.236 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 09:57:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:47.819+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:47 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:48.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:48 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:48.825+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:48 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:49 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:49 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4858 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:49.871+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:49 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:50.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:50 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:50.830+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:50 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:51 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:51.850+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:51 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:52.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:52.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:52.879+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:52 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:52 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:53 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:53.906+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:53 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 09:57:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:54.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 09:57:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:54.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:54.888+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:54 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:54 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4863 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:57:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:55.843+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:55 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:55 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:56.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:57:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:56.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:57:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:56.824+0000 7f47f8ed4640 -1 osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:56 np0005592159 ceph-osd[79779]: osd.2 162 get_health_metrics reporting 53 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e163 e163: 3 total, 3 up, 3 in
Jan 22 09:57:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:57.832+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:57 np0005592159 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:57:57 np0005592159 ceph-mon[77081]: 53 slow requests (by type [ 'delayed' : 53 ] most affected pool [ 'vms' : 34 ])
Jan 22 09:57:58 np0005592159 podman[265673]: 2026-01-22 14:57:58.070132457 +0000 UTC m=+0.126612635 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 09:57:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:57:58.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:57:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:57:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:57:58.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:57:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:58.800+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:58 np0005592159 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #166. Immutable memtables: 0.
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.022892) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 166
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879022960, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1375, "num_deletes": 250, "total_data_size": 2526993, "memory_usage": 2568984, "flush_reason": "Manual Compaction"}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #167: started
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879034291, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 167, "file_size": 1060655, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80734, "largest_seqno": 82104, "table_properties": {"data_size": 1056089, "index_size": 1833, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 14035, "raw_average_key_size": 21, "raw_value_size": 1045330, "raw_average_value_size": 1615, "num_data_blocks": 80, "num_entries": 647, "num_filter_entries": 647, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093793, "oldest_key_time": 1769093793, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 11494 microseconds, and 6219 cpu microseconds.
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.034388) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #167: 1060655 bytes OK
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.034411) [db/memtable_list.cc:519] [default] Level-0 commit table #167 started
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036386) [db/memtable_list.cc:722] [default] Level-0 commit table #167: memtable #1 done
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036412) EVENT_LOG_v1 {"time_micros": 1769093879036403, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.036436) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2520358, prev total WAL file size 2520358, number of live WAL files 2.
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000163.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037726) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [167(1035KB)], [165(11MB)]
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879037814, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [167], "files_L6": [165], "score": -1, "input_data_size": 12678975, "oldest_snapshot_seqno": -1}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #168: 12926 keys, 9372578 bytes, temperature: kUnknown
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879269405, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 168, "file_size": 9372578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9305328, "index_size": 33857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32325, "raw_key_size": 355103, "raw_average_key_size": 27, "raw_value_size": 9088417, "raw_average_value_size": 703, "num_data_blocks": 1214, "num_entries": 12926, "num_filter_entries": 12926, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093879, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 168, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.269803) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 9372578 bytes
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.323663) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 54.7 rd, 40.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(20.8) write-amplify(8.8) OK, records in: 13405, records dropped: 479 output_compression: NoCompression
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.323696) EVENT_LOG_v1 {"time_micros": 1769093879323681, "job": 106, "event": "compaction_finished", "compaction_time_micros": 231720, "compaction_time_cpu_micros": 56240, "output_level": 6, "num_output_files": 1, "total_output_size": 9372578, "num_input_records": 13405, "num_output_records": 12926, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879324379, "job": 106, "event": "table_file_deletion", "file_number": 167}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000165.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093879328661, "job": 106, "event": "table_file_deletion", "file_number": 165}
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.037594) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328765) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:57:59.328767) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:57:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:57:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:57:59.841+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:59 np0005592159 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:57:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592159 ceph-mon[77081]: Health check update: 53 slow ops, oldest one blocked for 4868 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:00 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:00.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:00.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:00.847+0000 7f47f8ed4640 -1 osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:00 np0005592159 ceph-osd[79779]: osd.2 163 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e164 e164: 3 total, 3 up, 3 in
Jan 22 09:58:01 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:01.812+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:01 np0005592159 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:02 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:02.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:02.776+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:02 np0005592159 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:03 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:03.750+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:03 np0005592159 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:04 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:04 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4873 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:04.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:04.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:04.799+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:04 np0005592159 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:05 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:05.783+0000 7f47f8ed4640 -1 osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:05 np0005592159 ceph-osd[79779]: osd.2 164 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:06 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 e165: 3 total, 3 up, 3 in
Jan 22 09:58:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:06.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:06.808+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:06 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:07 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:07.767+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:07 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:08.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:08.730+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:08 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:08.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:08 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:09 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:09 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4878 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:09.770+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:09 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:10.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:10.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:10.790+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:10 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:10 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:11 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:11.811+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:11 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:12.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:12.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:12.780+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:12 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:12 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:13.762+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:13 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:13 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:14.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:14.724+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:14 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:14.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:14 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:14 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4883 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:15 np0005592159 podman[265732]: 2026-01-22 14:58:15.65772435 +0000 UTC m=+0.077967683 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Jan 22 09:58:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:15.689+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:15 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:15 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:16.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:16.660+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:16 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:16.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:16 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:17.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:17 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:17 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:18.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:18.602+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:18 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:18.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:18 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:19.624+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:19 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:19 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4888 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:20.636+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:20 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:20.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:20 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:20 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:21.596+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:21 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:21 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:58:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:58:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:22.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:22.567+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:22 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:22.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:22 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:23.580+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:23 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:24 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:24.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:24 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:24.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:25 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:25 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4893 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:25 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:25.647+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:25 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:26.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:26 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:26.639+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:26 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:26.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:27 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:27.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:27 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:28.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:28 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:28.608+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:28 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:28.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:29 np0005592159 podman[265913]: 2026-01-22 14:58:29.132252784 +0000 UTC m=+0.169097727 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 09:58:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:29.613+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:29 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:29 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:58:29 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4898 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:30.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:30.657+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:30 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:30.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:31 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:31 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:31.487 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:58:31 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:31.490 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:58:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:31.621+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:31 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:32.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:32 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:32.493 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:58:32 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:32.608+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:32 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:32.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:33 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:33.587+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:33 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:34.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:34.555+0000 7f47f8ed4640 -1 osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:34 np0005592159 ceph-osd[79779]: osd.2 165 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:34 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e166 e166: 3 total, 3 up, 3 in
Jan 22 09:58:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:34.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:35 np0005592159 ceph-osd[79779]: osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:35.514+0000 7f47f8ed4640 -1 osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:35 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:36.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:36 np0005592159 ceph-osd[79779]: osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:36.502+0000 7f47f8ed4640 -1 osd.2 166 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e167 e167: 3 total, 3 up, 3 in
Jan 22 09:58:36 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:36.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:37 np0005592159 ceph-osd[79779]: osd.2 167 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:37.512+0000 7f47f8ed4640 -1 osd.2 167 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:37 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e168 e168: 3 total, 3 up, 3 in
Jan 22 09:58:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:38.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:38 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:38.540+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:38.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:38 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:39 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:39.550+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:39 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:39 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4908 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:40.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:40 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:40.511+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:40 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:40.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:41.489+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:41 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:41 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:42.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:42.468+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:42 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:42 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:43.420+0000 7f47f8ed4640 -1 osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:43 np0005592159 ceph-osd[79779]: osd.2 168 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:43 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 e169: 3 total, 3 up, 3 in
Jan 22 09:58:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:44.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:44.447+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:44 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:44.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:45 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:45 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4913 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:45.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:45 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592159 podman[266048]: 2026-01-22 14:58:46.027888286 +0000 UTC m=+0.075672835 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 09:58:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:46.470+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:46 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:46.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:58:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:58:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:58:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:58:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:47.502+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:47 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:47 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:48.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:48.461+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:48 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:48 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:48.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:49.494+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:49 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:49 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:49 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:58:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:50.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:50.531+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:50 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:50 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:50.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:51.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:51 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:51 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:58:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:52.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:58:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:52.518+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:52 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:52.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:52 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:53.525+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:53 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:53 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #169. Immutable memtables: 0.
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.047935) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 169
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934047977, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1060, "num_deletes": 259, "total_data_size": 1747541, "memory_usage": 1778784, "flush_reason": "Manual Compaction"}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #170: started
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934062852, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 170, "file_size": 1147709, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82109, "largest_seqno": 83164, "table_properties": {"data_size": 1143055, "index_size": 2113, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11768, "raw_average_key_size": 20, "raw_value_size": 1132988, "raw_average_value_size": 1970, "num_data_blocks": 90, "num_entries": 575, "num_filter_entries": 575, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093880, "oldest_key_time": 1769093880, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 15703 microseconds, and 8262 cpu microseconds.
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.063628) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #170: 1147709 bytes OK
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.063676) [db/memtable_list.cc:519] [default] Level-0 commit table #170 started
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065584) [db/memtable_list.cc:722] [default] Level-0 commit table #170: memtable #1 done
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065622) EVENT_LOG_v1 {"time_micros": 1769093934065610, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.065648) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 1742142, prev total WAL file size 1742142, number of live WAL files 2.
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000166.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.066905) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373731' seq:72057594037927935, type:22 .. '6C6F676D0034303233' seq:0, type:0; will stop at (end)
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [170(1120KB)], [168(9152KB)]
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934066955, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [170], "files_L6": [168], "score": -1, "input_data_size": 10520287, "oldest_snapshot_seqno": -1}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #171: 12966 keys, 10366360 bytes, temperature: kUnknown
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934154035, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 171, "file_size": 10366360, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10297542, "index_size": 35297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32453, "raw_key_size": 357288, "raw_average_key_size": 27, "raw_value_size": 10078662, "raw_average_value_size": 777, "num_data_blocks": 1270, "num_entries": 12966, "num_filter_entries": 12966, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 171, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.154418) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10366360 bytes
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.156195) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 120.7 rd, 118.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 8.9 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(18.2) write-amplify(9.0) OK, records in: 13501, records dropped: 535 output_compression: NoCompression
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.156219) EVENT_LOG_v1 {"time_micros": 1769093934156209, "job": 108, "event": "compaction_finished", "compaction_time_micros": 87166, "compaction_time_cpu_micros": 39199, "output_level": 6, "num_output_files": 1, "total_output_size": 10366360, "num_input_records": 13501, "num_output_records": 12966, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934156652, "job": 108, "event": "table_file_deletion", "file_number": 170}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000168.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093934158982, "job": 108, "event": "table_file_deletion", "file_number": 168}
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.066740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159078) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:58:54.159084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
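The rocksdb lines above (JOB 108) carry machine-readable EVENT_LOG_v1 JSON payloads alongside the free-form compaction summaries. A minimal Python sketch for pulling those payloads out of this journal and summarizing compaction jobs; the journalctl unit name is a hypothetical placeholder, adjust it to the actual ceph-mon service on the node:

    # Sketch: extract RocksDB EVENT_LOG_v1 JSON from the ceph-mon journal
    # and list finished compactions. Assumes journalctl is available and
    # the unit name below is a placeholder (not taken from this log).
    import json
    import re
    import subprocess

    EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})\s*$")

    out = subprocess.run(
        ["journalctl", "-u", "ceph-mon@compute-2", "--no-pager", "-o", "cat"],
        capture_output=True, text=True, check=True,
    ).stdout

    events = [json.loads(m.group(1))
              for line in out.splitlines()
              if (m := EVENT_RE.search(line))]

    for ev in events:
        if ev.get("event") == "compaction_finished":
            # fields seen in the compaction_finished events above
            print(ev["job"], ev["output_level"],
                  ev["total_output_size"], ev["compaction_time_micros"])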
Jan 22 09:58:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:54.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:54.526+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:54 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:58:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:54.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:54 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops (SLOW_OPS)
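The SLOW_OPS health check above reports 91 slow ops on osd.2, the oldest blocked for 4923 s. A short sketch of reading the same check back from the cluster with the ceph CLI, assuming a client keyring is available on this host; the JSON layout of "ceph health detail" can differ between releases, so the key lookups are illustrative:

    # Sketch: fetch the SLOW_OPS check seen in this journal via the ceph CLI.
    import json
    import subprocess

    detail = json.loads(subprocess.run(
        ["ceph", "health", "detail", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout)

    slow = detail.get("checks", {}).get("SLOW_OPS")
    if slow:
        # e.g. "91 slow ops, oldest one blocked for 4923 sec, osd.2 has slow ops"
        print(slow["summary"]["message"])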
Jan 22 09:58:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:55.536+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:55 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:55 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:56.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:56.573+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:56 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:56.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:56 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:57.577+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:57 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:57 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:58:58.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:58:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:58.575+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:58 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:58:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:58:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:58:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
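The radosgw "beast:" lines above appear to be periodic anonymous HEAD / probes from 192.168.122.100 and 192.168.122.102, each logged with its latency. A sketch of parsing them for client, request line, status and latency; the regex is inferred from the line format shown in this journal, not from a documented radosgw format:

    # Sketch: parse a radosgw beast access-log line as it appears above.
    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:14:58:58.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    if m:
        print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))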
Jan 22 09:58:58 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:58:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:58:59.564+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:59 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:58:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:58:59 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4928 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:00 np0005592159 podman[266124]: 2026-01-22 14:59:00.033351057 +0000 UTC m=+0.083250177 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
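The podman event above records a passing health check for ovn_controller, whose configured test is '/openstack/healthcheck' per the config_data shown. A sketch of re-running that check by hand with podman's built-in healthcheck runner, assuming podman is on PATH and using the container name from this log:

    # Sketch: re-run the ovn_controller health check that produced the
    # health_status=healthy event above. Exit code 0 means the configured
    # test succeeded.
    import subprocess

    result = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if result.returncode == 0 else "unhealthy")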
Jan 22 09:59:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:00.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:00.520+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:00 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:00.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:00 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:01.532+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:01 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:01 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:02.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:02.573+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:02 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:02.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:02 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:03.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:03 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:04.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:04.517+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:04 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:04.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:05 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:05 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:05.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:05 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:06 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:06.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:06.448+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:06 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:06.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:07 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:07.484+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:07 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:08 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:08.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:08.524+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:08 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:08.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:09 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:09 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:09.488+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:09 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:10 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:10.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:10.486+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:10 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:10.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:11 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:11.462+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:11 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:12 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:12.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:12.464+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:12 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:12.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:13 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:13.459+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:13 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:14 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:14 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:14.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:14.445+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:14 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:14.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:15 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:15.488+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:15 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592159 podman[266184]: 2026-01-22 14:59:16.24622336 +0000 UTC m=+0.097633621 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 22 09:59:16 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:16.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:16.523+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:16 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:16.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #172. Immutable memtables: 0.
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.315124) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 172
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957315162, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 556, "num_deletes": 251, "total_data_size": 660256, "memory_usage": 671352, "flush_reason": "Manual Compaction"}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #173: started
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957319743, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 173, "file_size": 432980, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83170, "largest_seqno": 83720, "table_properties": {"data_size": 430246, "index_size": 705, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7232, "raw_average_key_size": 19, "raw_value_size": 424544, "raw_average_value_size": 1141, "num_data_blocks": 31, "num_entries": 372, "num_filter_entries": 372, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093934, "oldest_key_time": 1769093934, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 4670 microseconds, and 1572 cpu microseconds.
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.319797) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #173: 432980 bytes OK
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.319908) [db/memtable_list.cc:519] [default] Level-0 commit table #173 started
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321294) [db/memtable_list.cc:722] [default] Level-0 commit table #173: memtable #1 done
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321324) EVENT_LOG_v1 {"time_micros": 1769093957321303, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321339) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 657001, prev total WAL file size 657001, number of live WAL files 2.
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000169.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321762) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [173(422KB)], [171(10123KB)]
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957321826, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [173], "files_L6": [171], "score": -1, "input_data_size": 10799340, "oldest_snapshot_seqno": -1}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #174: 12827 keys, 9181524 bytes, temperature: kUnknown
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957385551, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 174, "file_size": 9181524, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9114522, "index_size": 33801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 32133, "raw_key_size": 355242, "raw_average_key_size": 27, "raw_value_size": 8898614, "raw_average_value_size": 693, "num_data_blocks": 1203, "num_entries": 12827, "num_filter_entries": 12827, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769093957, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 174, "seqno_to_time_mapping": "N/A"}}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.385842) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9181524 bytes
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.387407) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.3 rd, 143.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.9 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(46.1) write-amplify(21.2) OK, records in: 13338, records dropped: 511 output_compression: NoCompression
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.387452) EVENT_LOG_v1 {"time_micros": 1769093957387435, "job": 110, "event": "compaction_finished", "compaction_time_micros": 63797, "compaction_time_cpu_micros": 23563, "output_level": 6, "num_output_files": 1, "total_output_size": 9181524, "num_input_records": 13338, "num_output_records": 12827, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957387752, "job": 110, "event": "table_file_deletion", "file_number": 173}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000171.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769093957390393, "job": 110, "event": "table_file_deletion", "file_number": 171}
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.321661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 09:59:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-14:59:17.390472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
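JOB 110 above reports write-amplify(21.2) and read-write-amplify(46.1). Those figures can be reproduced from the file sizes in its own table_file_creation events; a short sketch using the byte counts logged for files 173, 171 and 174:

    # Sketch: recompute JOB 110's amplification figures from its own log lines.
    l0_in = 432_980        # file 173, flushed L0 input table
    l6_in = 10_366_360     # file 171, existing L6 input table
    out = 9_181_524        # file 174, compaction output

    write_amplify = out / l0_in                          # ~21.2, as logged
    read_write_amplify = (l0_in + l6_in + out) / l0_in   # ~46.1, as logged
    print(round(write_amplify, 1), round(read_write_amplify, 1))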
Jan 22 09:59:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:17.521+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:17 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:18 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:18.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 09:59:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 09:59:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 09:59:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4281868125' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
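The audited mon_commands above ("df" and "osd pool get-quota" on pool volumes, issued by client.openstack) map onto ordinary ceph CLI calls. A sketch, assuming a client keyring is present on the host; the JSON field names are typical of recent releases but may vary:

    # Sketch: issue the same two mon commands audited above via the ceph CLI.
    import json
    import subprocess

    def mon_cmd(args):
        return json.loads(subprocess.run(
            ["ceph", *args, "-f", "json"],
            capture_output=True, text=True, check=True,
        ).stdout)

    df = mon_cmd(["df"])
    quota = mon_cmd(["osd", "pool", "get-quota", "volumes"])
    # field names may differ by release; taken as typical, not from this log
    print(df.get("stats", {}).get("total_bytes"), quota.get("quota_max_bytes"))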
Jan 22 09:59:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:18.501+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:18 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:18.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:19 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:19 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4947 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:19.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:19 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:20.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:20 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:20.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:20 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:20.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:21.514+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:21 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:21 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:22.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:22 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:22 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:22.562+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:22.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:23 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:23.567+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:24.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:24 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:24.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:24 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:24.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:25 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:25.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:25 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:26 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:26.656+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:26 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:26.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:27 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:27.694+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:27 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:28 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:28.655+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:28.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:29.617+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:29 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:30 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:30 np0005592159 podman[266258]: 2026-01-22 14:59:30.222813974 +0000 UTC m=+0.093282110 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 09:59:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:30.572+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:30 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:30.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 09:59:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:31.578+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:31 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:31 np0005592159 podman[266554]: 2026-01-22 14:59:31.617715343 +0000 UTC m=+0.068555155 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Jan 22 09:59:31 np0005592159 podman[266554]: 2026-01-22 14:59:31.707822662 +0000 UTC m=+0.158662474 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 09:59:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:32.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:32 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:32.547 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 09:59:32 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:32.548 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 09:59:32 np0005592159 podman[266705]: 2026-01-22 14:59:32.586015703 +0000 UTC m=+0.082530418 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:59:32 np0005592159 podman[266705]: 2026-01-22 14:59:32.600675794 +0000 UTC m=+0.097190509 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 09:59:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:32.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:32 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:32 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:32 np0005592159 podman[266772]: 2026-01-22 14:59:32.813200949 +0000 UTC m=+0.047342778 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-type=git, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.openshift.tags=Ceph keepalived, architecture=x86_64, name=keepalived)
Jan 22 09:59:32 np0005592159 podman[266772]: 2026-01-22 14:59:32.828529687 +0000 UTC m=+0.062671516 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, distribution-scope=public, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, description=keepalived for Ceph)
Jan 22 09:59:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:32.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:33.640+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:33 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:33 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 09:59:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:34.667+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:34 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:34.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:34 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 09:59:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:35.642+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:35 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:35 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:35 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:36.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:36.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:36 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:36.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:36 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:37.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:37 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:37 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:38.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:38 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:38.551 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 09:59:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:38.621+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:38 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:38.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:39 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:39.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:39 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:40 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:40 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:40.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:40.549+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:40 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:40.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:41 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 09:59:41 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:41.508+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:41 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:42 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:42.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:42 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:42.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:42.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:43 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:43.523+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:43 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:44 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:44.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:44.571+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:44 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:44.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:45 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:45 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:45.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:45 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:46 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 09:59:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:46.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 09:59:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:46.568+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:46 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:46.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:47 np0005592159 podman[267046]: 2026-01-22 14:59:47.036298275 +0000 UTC m=+0.089224248 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 09:59:47 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 09:59:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 09:59:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 14:59:47.237 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 09:59:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:47.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:47 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:48 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:48.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:48.617+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:48 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:49 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:49 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:49.642+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:50 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:50.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:51 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:51 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:51.616+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:52 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:52.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:52 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:52.577+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:52.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:53 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:53.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:53 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:54 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:54.632+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:54 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:54.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:55 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:55 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 09:59:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:55.586+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:55 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 09:59:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:56.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 09:59:56 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:56.582+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:56 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:56.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:57 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:57.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:57 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:14:59:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:58 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:58.637+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:58 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 09:59:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 09:59:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:14:59:58.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 09:59:59 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 09:59:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 09:59:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T14:59:59.685+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:59 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 09:59:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:00 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops
Jan 22 10:00:00 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:00.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:00 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:00.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:01 np0005592159 podman[267124]: 2026-01-22 15:00:01.055475765 +0000 UTC m=+0.112469426 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:00:01 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:01.671+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:01 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:02.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:02 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:02.675+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:02 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:02.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:03 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:03.632+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:03 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:04.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:04 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:04.650+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:04 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:04.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:05.647+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:05 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:05 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:05 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4993 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:06.616+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:06 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:06 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:06.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:07.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:07 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:07 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:08.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:08.651+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:08 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:08 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:08.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:09.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:09 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:09 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:10.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:10.662+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:10 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:10 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 4998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:10.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:11.655+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:11 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:11 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:12.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:12.691+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:12 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:12.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:12 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:13.696+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:13 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:13 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:00:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:14.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:14.739+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:14 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:14.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:14 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:15.708+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:15 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:16 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:16 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 5003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:16.687+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:16 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:16.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:17 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:17 np0005592159 podman[267210]: 2026-01-22 15:00:17.533159601 +0000 UTC m=+0.082614803 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:00:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:17.682+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:17 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:18 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:00:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:00:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:00:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1825904486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:00:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:18.661+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:18 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:19 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:19.685+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:19 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:20.707+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:20 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:20.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:21 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:21.729+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:21 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:22.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:22.734+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:22 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:22.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:23 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:23.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:23 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:24.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:24.659+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:24 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:24.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:25 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:25 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:25.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:25 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:26.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:26 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:26.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:26 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:27 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:27 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:00:27 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:00:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:27.591+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:27 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:28.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:28 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:28.557+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:28 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:28.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:29.527+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:29 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:29 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:30.495+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:30 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:30 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:30 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:30.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:31.526+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:31 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:31 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:32 np0005592159 podman[267237]: 2026-01-22 15:00:32.060726107 +0000 UTC m=+0.113018573 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 22 10:00:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:32.493+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:32 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:32.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:32 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:32.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:33.502+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:33 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:33.906 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:00:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:33.907 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:00:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:34.508+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:34 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:34.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:34 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:34.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:35.492+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:35 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:35 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:36.510+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:36 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:00:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:36.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:00:36 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:36.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:37.510+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:37 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:37 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:38.501+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:38 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:38.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:38 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:38.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:39.462+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:39 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:39 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:40.481+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:40 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:40.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:40 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:40 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:41.518+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:41 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:41 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:00:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:00:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:42.519+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:42 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:42.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:42 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:43.527+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:43 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:43 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:43.909 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:00:43 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 35 ])
Jan 22 10:00:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:44.543+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:44 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:44.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:44 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:45.555+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:45 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:45 np0005592159 ceph-mon[77081]: Health check update: 54 slow ops, oldest one blocked for 5032 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:46.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:46.543+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:46 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:46.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:47 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:00:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:00:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:00:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:00:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:47.567+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:47 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:47 np0005592159 podman[267452]: 2026-01-22 15:00:47.995972212 +0000 UTC m=+0.054783228 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:00:48 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:00:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:48.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:48.574+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:48 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:48.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:49 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:49.587+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:49 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:50 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:50 np0005592159 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:50.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:50.622+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:50 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:50.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:51 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:51.660+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:51 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:52 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:52.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:52.612+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:52 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:00:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:52.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:00:53 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:53.635+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:53 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:54 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:54.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:54.681+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:54 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:54.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:55 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:55 np0005592159 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:00:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:55.718+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:55 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:56 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:56.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:56.678+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:56 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:56.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:57 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:57.683+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:57 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:00:58.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:58 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:58.650+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:58 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 92 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:00:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:00:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:00:58.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:00:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:00:59 np0005592159 ceph-mon[77081]: 92 slow requests (by type [ 'delayed' : 92 ] most affected pool [ 'vms' : 63 ])
Jan 22 10:00:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:00:59.620+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:59 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:00:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:00.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:00.585+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:00 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:00 np0005592159 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:00 np0005592159 ceph-mon[77081]: Health check update: 92 slow ops, oldest one blocked for 5048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:00.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:01.555+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:01 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 88 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:01 np0005592159 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:02 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:02.520+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:02.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:02 np0005592159 ceph-mon[77081]: 88 slow requests (by type [ 'delayed' : 88 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:02.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:03 np0005592159 podman[267590]: 2026-01-22 15:01:03.080198414 +0000 UTC m=+0.133898822 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:01:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:03.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:03 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:03 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:04.454+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:04 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:04.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:04 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:04.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:05.503+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:05 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:05 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:05 np0005592159 ceph-mon[77081]: Health check update: 88 slow ops, oldest one blocked for 5053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:06.495+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:06 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:06.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:06 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:06.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:07.500+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:07 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:07 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:08.489+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:08 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:08.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:08.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:08 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:09.493+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:09 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:09 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:10.457+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:10 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:10.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:10.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:10 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:10 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5057 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:11.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:11 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:11 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:12.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:12 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:12.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:13 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:13.423+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:13 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:14 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:14.383+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:14 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:14.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:14.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:15 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:15.421+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:15 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:16 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:16.447+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:16 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:16.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:16.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:17 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:17.470+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:17 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:18.474+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:18 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:18.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:18.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:19 np0005592159 podman[267675]: 2026-01-22 15:01:19.035356873 +0000 UTC m=+0.081382352 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:01:19 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:19.425+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:19 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:20 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:20 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5067 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:20.410+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:20 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:20.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:20.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:21 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:21.370+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:21 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:22.366+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:22 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:22.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:22.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:23 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:23.377+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:23 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:24.399+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:24 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:24.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:24.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:25 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:25 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5072 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:25.362+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:25 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:26 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:26.394+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:26 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:26.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:26.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:27 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:27.382+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:27 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:28.361+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:28 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:28.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:28.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:29 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:29.408+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:29 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:30.431+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:30 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:30.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:30 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:30 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5077 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:30.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:31.433+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:31 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 9 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:31 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:32.408+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:32 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:32.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:32 np0005592159 ceph-mon[77081]: 9 slow requests (by type [ 'delayed' : 9 ] most affected pool [ 'vms' : 7 ])
Jan 22 10:01:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:32.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:33.440+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:33 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:33 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:34 np0005592159 podman[267702]: 2026-01-22 15:01:34.070214876 +0000 UTC m=+0.129420628 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 22 10:01:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:34.487+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:34 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:34.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:34.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:35 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:35.450+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:35 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:36 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:36 np0005592159 ceph-mon[77081]: Health check update: 9 slow ops, oldest one blocked for 5082 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:36.406+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:36 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:36.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:36.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:37 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:37.446+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:37 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:38 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:38.479+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:38 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:38.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:39.473+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:39 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:40.476+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:40 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:40.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:40.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:41 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:41 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:41.469+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:41 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:42.507+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:42 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:42.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:42.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:43 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:43.485+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:43 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:44.513+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:44 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:44.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:44.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:45 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:45 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5092 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:45 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:45.535+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:46 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:46 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:46.557+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:46.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:46.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:47 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.238 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:01:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:01:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:01:47 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:47.600+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:48 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:48 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:48.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:48.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:48.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:49 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:49.618+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:49 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:49 np0005592159 podman[267917]: 2026-01-22 15:01:49.98672848 +0000 UTC m=+0.052066459 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:01:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:01:50 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:01:50 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:50 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:50.436 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:01:50 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:50.437 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:01:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:50.593+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:50 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:50.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:50.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:51 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:51.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:51 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:52 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:52 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:01:52.438 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:01:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:52.575+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:52 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:52.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:53 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:53.623+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:53 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:54 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:54.651+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:54 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:01:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:01:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:54.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:55 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:55 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:01:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:55.627+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:55 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 20 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:01:56 np0005592159 ceph-mon[77081]: 20 slow requests (by type [ 'delayed' : 20 ] most affected pool [ 'vms' : 14 ])
Jan 22 10:01:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:56.611+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:56 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:56.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:57.578+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:57 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:57 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:58.589+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:58 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:01:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:01:58.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:01:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:01:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:01:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:01:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:01:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:01:59.605+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:59 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:01:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:01:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:01:59 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:00.588+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:00 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:00 np0005592159 ceph-mon[77081]: Health check update: 20 slow ops, oldest one blocked for 5108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:00.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:00.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:01.541+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:01 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #175. Immutable memtables: 0.
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.908775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 175
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121908862, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 2454, "num_deletes": 251, "total_data_size": 4783821, "memory_usage": 4875248, "flush_reason": "Manual Compaction"}
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #176: started
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121947764, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 176, "file_size": 3108278, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 83725, "largest_seqno": 86174, "table_properties": {"data_size": 3099212, "index_size": 5239, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 23322, "raw_average_key_size": 21, "raw_value_size": 3079129, "raw_average_value_size": 2822, "num_data_blocks": 225, "num_entries": 1091, "num_filter_entries": 1091, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769093958, "oldest_key_time": 1769093958, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 39043 microseconds, and 7998 cpu microseconds.
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.947831) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #176: 3108278 bytes OK
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.947851) [db/memtable_list.cc:519] [default] Level-0 commit table #176 started
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958601) [db/memtable_list.cc:722] [default] Level-0 commit table #176: memtable #1 done
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958624) EVENT_LOG_v1 {"time_micros": 1769094121958618, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.958644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 4772716, prev total WAL file size 4772716, number of live WAL files 2.
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000172.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.959986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [176(3035KB)], [174(8966KB)]
Jan 22 10:02:01 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094121960061, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [176], "files_L6": [174], "score": -1, "input_data_size": 12289802, "oldest_snapshot_seqno": -1}
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #177: 13401 keys, 10597195 bytes, temperature: kUnknown
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122053099, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 177, "file_size": 10597195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10525771, "index_size": 36815, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33541, "raw_key_size": 368994, "raw_average_key_size": 27, "raw_value_size": 10299091, "raw_average_value_size": 768, "num_data_blocks": 1325, "num_entries": 13401, "num_filter_entries": 13401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094121, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 177, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.053414) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10597195 bytes
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.059191) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.0 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 13918, records dropped: 517 output_compression: NoCompression
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.059210) EVENT_LOG_v1 {"time_micros": 1769094122059201, "job": 112, "event": "compaction_finished", "compaction_time_micros": 93102, "compaction_time_cpu_micros": 33675, "output_level": 6, "num_output_files": 1, "total_output_size": 10597195, "num_input_records": 13918, "num_output_records": 13401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122059798, "job": 112, "event": "table_file_deletion", "file_number": 176}
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000174.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094122061531, "job": 112, "event": "table_file_deletion", "file_number": 174}
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:01.959871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061574) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:02:02.061585) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:02:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:02.562+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:02 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:02.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:02 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:03.585+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:03 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:04 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:04.566+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:04 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:04.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:04.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:05 np0005592159 podman[268045]: 2026-01-22 15:02:05.035796893 +0000 UTC m=+0.095672723 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:02:05 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:05.560+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:05 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:06 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:06 np0005592159 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:06.558+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:06 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:06.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:06.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:07 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:07 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:07.588+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:07 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:08.598+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:08 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:08.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:08.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:09.581+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:09 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:09 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:10.614+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:10 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:10.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:11 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:11 np0005592159 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:11.592+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:11 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:12 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:12.615+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:12 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:12.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:02:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:12.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:02:13 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:13 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:13.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:13 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:14.606+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:14 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:14.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:14 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:14.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:15 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:15.599+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:16 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:16 np0005592159 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:16 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:16.624+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:16.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:16.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:17 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:17 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:17.634+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:18 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:18.610+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:18 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:18.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:19.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:19 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:19 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:19.614+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:20 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:20.603+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:20.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:20 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:20 np0005592159 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:20 np0005592159 podman[268129]: 2026-01-22 15:02:20.985180239 +0000 UTC m=+0.050769397 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 10:02:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:21.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:21 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:21.579+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:22 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:22 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:22.596+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:22.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:23.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:23 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:23 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:23 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:23.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:24 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:24 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:24.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:24.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:25.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:25 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:25 np0005592159 ceph-mon[77081]: Health check update: 86 slow ops, oldest one blocked for 5133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:25 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:25.641+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:26 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:26.666+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:26 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:26.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:27.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:27 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:27.654+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:28 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:28.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:28.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:28 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:29.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:29 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:29.701+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:29 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:29 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:30 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:30.740+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:30.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:30 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:30 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:31.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:31.704+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 28 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:31 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 28 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:02:31 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:32.684+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:32 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:32.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:33.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:33 np0005592159 ceph-mon[77081]: 28 slow requests (by type [ 'delayed' : 28 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:02:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:33.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:33 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:34.715+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:34 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:35.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:35 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:35 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:35.692+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:35 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:36 np0005592159 podman[268155]: 2026-01-22 15:02:36.012387481 +0000 UTC m=+0.078609825 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:02:36 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:36.681+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:36 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:36.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:37.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:37 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:37.658+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:37 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:38.636+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:38 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:38.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:39.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:39 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:39.613+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 87 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:39 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 87 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:40.648+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 34 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:40 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 34 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:02:40 np0005592159 ceph-mon[77081]: 87 slow requests (by type [ 'delayed' : 87 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:02:40 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:40.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:41.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:41.645+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 89 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:41 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 89 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:02:42 np0005592159 ceph-mon[77081]: 34 slow requests (by type [ 'delayed' : 34 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:02:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:42.693+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:42 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:42.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:43.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:43 np0005592159 ceph-mon[77081]: 89 slow requests (by type [ 'delayed' : 89 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:02:43 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:43.658+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:43 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:44.625+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:44 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:44.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:45.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e169 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:45.638+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:45 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:45 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:45 np0005592159 ceph-mon[77081]: Health check update: 87 slow ops, oldest one blocked for 5153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:46.662+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:46 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:02:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:46.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:02:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:47.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:47 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.239 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:02:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:02:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:02:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:02:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:47.675+0000 7f47f8ed4640 -1 osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:47 np0005592159 ceph-osd[79779]: osd.2 169 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e170 e170: 3 total, 3 up, 3 in
Jan 22 10:02:48 np0005592159 ceph-osd[79779]: osd.2 170 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:48.674+0000 7f47f8ed4640 -1 osd.2 170 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:48.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:49.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e171 e171: 3 total, 3 up, 3 in
Jan 22 10:02:49 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:49 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:49.685+0000 7f47f8ed4640 -1 osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:49 np0005592159 ceph-osd[79779]: osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:50.637+0000 7f47f8ed4640 -1 osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:50 np0005592159 ceph-osd[79779]: osd.2 171 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:50.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:51.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:51 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:51 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:51 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e172 e172: 3 total, 3 up, 3 in
Jan 22 10:02:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:51.633+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:51 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 62 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:02:51 np0005592159 podman[268242]: 2026-01-22 15:02:51.992823839 +0000 UTC m=+0.051090775 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:02:52 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:02:52 np0005592159 ceph-mon[77081]: 62 slow requests (by type [ 'delayed' : 62 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:02:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:52.666+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:52 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:52.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:02:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:53.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:02:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:53.701+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:53 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:54 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:54.723+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:54 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:54.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:54 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:55.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:02:55.270 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:02:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:02:55.270 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:02:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:02:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:55.756+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:55 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:56 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:56 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 5163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:02:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:56.719+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:56 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:56.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:57.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:57.761+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:57 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:02:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:58.811+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:58 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:02:58 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:02:58.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:02:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:02:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:02:59.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:02:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:02:59.804+0000 7f47f8ed4640 -1 osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:59 np0005592159 ceph-osd[79779]: osd.2 172 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:02:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:02:59 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 e173: 3 total, 3 up, 3 in
Jan 22 10:03:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:00.834+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:00 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:00 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:00.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:01.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:01 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:01.271 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:03:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:01.810+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:01 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:01 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:02.829+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:02 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:02.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:03.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:03.789+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:03 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:03 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:03:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:04.741+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:04 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:04.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:04 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:05.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:05.765+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:05 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:06 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:06 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:06.725+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:06 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:06.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:07.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:07 np0005592159 podman[268499]: 2026-01-22 15:03:07.075327851 +0000 UTC m=+0.124508871 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:03:07 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:07 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:07.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:07 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:08.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:08.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:09.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:09 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:09.729+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:09 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:10.726+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:10 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:10.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:11.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:11 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:11.763+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:11 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:12.765+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:12 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:12.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:13.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:13 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:13.763+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:13 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:14.729+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:14 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:14.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:15.680+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:15 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:15 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:15 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:16.702+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:16 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:16 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:16.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:17.670+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:17 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:17 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:03:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:03:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:03:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4050759706' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:03:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:18.659+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:18 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:18.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:19.682+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:19 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:20 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:20.646+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:20 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:03:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:20.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:03:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:21 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:21 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:21.666+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:21 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:22 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:03:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:22.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:22 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:22.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:22 np0005592159 podman[268584]: 2026-01-22 15:03:22.98764064 +0000 UTC m=+0.042200875 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:03:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:23.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:23.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:23 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:24.647+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:24 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:24.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:25 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:25 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:25.671+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:25 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:26 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:26.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:26 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:26.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:03:27 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:03:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:03:27 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/150705854' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:03:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:27.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:27 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:27.663+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:27 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:28.644+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:28 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:28.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:29.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:29 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:29.654+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:29 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #178. Immutable memtables: 0.
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.088735) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 178
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210088782, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1382, "num_deletes": 257, "total_data_size": 2543602, "memory_usage": 2590024, "flush_reason": "Manual Compaction"}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #179: started
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210106611, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 179, "file_size": 1671345, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86179, "largest_seqno": 87556, "table_properties": {"data_size": 1665616, "index_size": 2868, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14497, "raw_average_key_size": 20, "raw_value_size": 1653115, "raw_average_value_size": 2361, "num_data_blocks": 124, "num_entries": 700, "num_filter_entries": 700, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094122, "oldest_key_time": 1769094122, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 17948 microseconds, and 7796 cpu microseconds.
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106683) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #179: 1671345 bytes OK
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.106712) [db/memtable_list.cc:519] [default] Level-0 commit table #179 started
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108511) [db/memtable_list.cc:722] [default] Level-0 commit table #179: memtable #1 done
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108536) EVENT_LOG_v1 {"time_micros": 1769094210108529, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.108559) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2536863, prev total WAL file size 2536863, number of live WAL files 2.
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000175.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.109872) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303232' seq:72057594037927935, type:22 .. '6C6F676D0034323735' seq:0, type:0; will stop at (end)
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [179(1632KB)], [177(10MB)]
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210109967, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [179], "files_L6": [177], "score": -1, "input_data_size": 12268540, "oldest_snapshot_seqno": -1}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #180: 13570 keys, 12122036 bytes, temperature: kUnknown
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210223873, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 180, "file_size": 12122036, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12047922, "index_size": 39057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 33989, "raw_key_size": 374170, "raw_average_key_size": 27, "raw_value_size": 11816645, "raw_average_value_size": 870, "num_data_blocks": 1415, "num_entries": 13570, "num_filter_entries": 13570, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094210, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 180, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.224300) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 12122036 bytes
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.226217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 107.6 rd, 106.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 10.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 14101, records dropped: 531 output_compression: NoCompression
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.226251) EVENT_LOG_v1 {"time_micros": 1769094210226235, "job": 114, "event": "compaction_finished", "compaction_time_micros": 113998, "compaction_time_cpu_micros": 58778, "output_level": 6, "num_output_files": 1, "total_output_size": 12122036, "num_input_records": 14101, "num_output_records": 13570, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210226981, "job": 114, "event": "table_file_deletion", "file_number": 179}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000177.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094210230588, "job": 114, "event": "table_file_deletion", "file_number": 177}
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.109730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230638) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:03:30.230654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:30.654+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:30 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:31 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:31.631+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:31 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:32.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:32 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:33.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:33.239 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:03:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:33.241 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:03:33 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:33.690+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:33 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:34.691+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:34 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:34.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:35.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:35 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:35 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:35.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:35 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:36.730+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:36 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:36 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:36.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:37.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:37.741+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:37 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:37 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592159 podman[268611]: 2026-01-22 15:03:38.063779648 +0000 UTC m=+0.121479416 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:03:38 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:38.243 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:03:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:38.709+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:38 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:38.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:39.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:39 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:39 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:40.691+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:40 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:40 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:41.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:41.699+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:41 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:41 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:42.655+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:42 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:42.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:43.631+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:43 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:43 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:44.679+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:44 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:44.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:45.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:45 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:45 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:45 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5213 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:46.689+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:46 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:46 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:46.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:47.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:03:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:03:47.240 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:03:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:47.732+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:47 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:47 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:48.728+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:48 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:48 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:49.774+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:49 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:50 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:50.769+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:50 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:51.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:51.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:51 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:51 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:51.764+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:51 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:52 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:52.747+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:52 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:53.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:53 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:53.782+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:53 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:54 np0005592159 podman[268696]: 2026-01-22 15:03:54.04841778 +0000 UTC m=+0.097320818 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 22 10:03:54 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:54.770+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:54 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:55.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:55 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:55 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:03:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:03:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:55.721+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:55 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:56 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:56.752+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:56 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:57.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:57 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:57.707+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:57 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:58 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:58.664+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:58 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:03:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:03:59.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:03:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:03:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:03:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:03:59.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:03:59 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:03:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:03:59.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:59 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:03:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:00 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:00 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:00.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:00 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:01.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:04:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:04:01 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:01.658+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:01 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:02 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:02.662+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:02 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:03.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:03.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:03 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:03.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:03 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:04 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:04.731+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:04 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:05.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:05 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:05 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:05.769+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:05 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:04:06 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:06.740+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:06 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:07.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:07 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:07.734+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:07 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:08 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:08.710+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:08 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:09 np0005592159 podman[268906]: 2026-01-22 15:04:09.015185702 +0000 UTC m=+0.080589414 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 10:04:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:09.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:09.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:09 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:09.725+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:09 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:10 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:10 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:10.735+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:10 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:11.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:11.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:11 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:11.754+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:11 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:12 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:04:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:12.737+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:12 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:13.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:13.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:13 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:13.728+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:13 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:14 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:14.715+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:14 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:15.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:15 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:15 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:15.676+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:15 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:16 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:16.704+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:16 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:17.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:17.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:17 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:17.715+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:17 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:18 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:18.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:18 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:19.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:19 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:19.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:19 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:20 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:20 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5248 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:20.762+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:20 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:21.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:21.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:21 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:21.800+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:21 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:22 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:22.813+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:22 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 99 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:23.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000024s ======
Jan 22 10:04:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 22 10:04:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:23.832+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:23 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:23 np0005592159 ceph-mon[77081]: 99 slow requests (by type [ 'delayed' : 99 ] most affected pool [ 'vms' : 67 ])
Jan 22 10:04:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:24.794+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:24 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:24 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592159 podman[269040]: 2026-01-22 15:04:25.025965068 +0000 UTC m=+0.060936848 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:04:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:25.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:25.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:25.807+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:25 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:25 np0005592159 ceph-mon[77081]: Health check update: 99 slow ops, oldest one blocked for 5253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:26.774+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:26 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:26 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:27.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:27.750+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:27 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:27 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:28.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:28 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:28 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:29.678+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:29 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:29 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:30.717+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:30 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:31.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:31 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:31 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:31.687+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:31 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:32.673+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:32 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:33.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:33 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:33.643+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:33 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:34 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:34.677+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:34 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:35.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:35 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:35 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:35.703+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:35 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:36 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:36.693+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:36 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:37.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:37.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:37 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:37.686+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:37 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:38 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:38.713+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:38 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:39.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:39.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:39 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:39.674+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:39 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:40 np0005592159 podman[269117]: 2026-01-22 15:04:40.036996121 +0000 UTC m=+0.094485247 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 22 10:04:40 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:40 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:40.693+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:40 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:41.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:41.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:41 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:41.700+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:41 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:42 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:42.653+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:42 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:43.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:43.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:43 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:43.610+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:43 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:44 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:44.595+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:44 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:45.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:45.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:45 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:45 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:45.624+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:45 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:46 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:46.640+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:46 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:47.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:47.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:04:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:04:47.241 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:04:47 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:47.603+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:47 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:48 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:48.571+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:48 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:49.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:49.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #181. Immutable memtables: 0.
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.512571) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 181
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289512653, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1345, "num_deletes": 251, "total_data_size": 2360734, "memory_usage": 2388928, "flush_reason": "Manual Compaction"}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #182: started
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289522582, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 182, "file_size": 1539104, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 87561, "largest_seqno": 88901, "table_properties": {"data_size": 1533722, "index_size": 2585, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13910, "raw_average_key_size": 20, "raw_value_size": 1522007, "raw_average_value_size": 2275, "num_data_blocks": 111, "num_entries": 669, "num_filter_entries": 669, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094210, "oldest_key_time": 1769094210, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 10068 microseconds, and 4805 cpu microseconds.
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.522651) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #182: 1539104 bytes OK
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.522670) [db/memtable_list.cc:519] [default] Level-0 commit table #182 started
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524649) [db/memtable_list.cc:722] [default] Level-0 commit table #182: memtable #1 done
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524666) EVENT_LOG_v1 {"time_micros": 1769094289524660, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.524686) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 2354236, prev total WAL file size 2354236, number of live WAL files 2.
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000178.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525294) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [182(1503KB)], [180(11MB)]
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289525355, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [182], "files_L6": [180], "score": -1, "input_data_size": 13661140, "oldest_snapshot_seqno": -1}
Jan 22 10:04:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:49.589+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:49 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #183: 13722 keys, 11987527 bytes, temperature: kUnknown
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289615953, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 183, "file_size": 11987527, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11912712, "index_size": 39374, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34373, "raw_key_size": 378510, "raw_average_key_size": 27, "raw_value_size": 11679140, "raw_average_value_size": 851, "num_data_blocks": 1424, "num_entries": 13722, "num_filter_entries": 13722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094289, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 183, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.616405) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11987527 bytes
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.617699) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.5 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 11.6 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(16.7) write-amplify(7.8) OK, records in: 14239, records dropped: 517 output_compression: NoCompression
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.617721) EVENT_LOG_v1 {"time_micros": 1769094289617710, "job": 116, "event": "compaction_finished", "compaction_time_micros": 90753, "compaction_time_cpu_micros": 39280, "output_level": 6, "num_output_files": 1, "total_output_size": 11987527, "num_input_records": 14239, "num_output_records": 13722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289618205, "job": 116, "event": "table_file_deletion", "file_number": 182}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000180.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094289621081, "job": 116, "event": "table_file_deletion", "file_number": 180}
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.525230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621162) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621164) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:04:49.621166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:04:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:50 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:50 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:50.590+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:50 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:51.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:51 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:51 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:51.576+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:52 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:52.622+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:52 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 17 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:53.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:53.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:53 np0005592159 ceph-mon[77081]: 17 slow requests (by type [ 'delayed' : 17 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:04:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:53.639+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:53 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:54 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:54.641+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:54 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:55.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:55.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:04:55 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:55 np0005592159 ceph-mon[77081]: Health check update: 17 slow ops, oldest one blocked for 5283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:04:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:55.683+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:55 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 82 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:56 np0005592159 podman[269153]: 2026-01-22 15:04:56.020867914 +0000 UTC m=+0.069562587 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:04:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:56.711+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:56 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:56 np0005592159 ceph-mon[77081]: 82 slow requests (by type [ 'delayed' : 82 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:04:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:04:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:57.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:04:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:57.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:57.669+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:57 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:57 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:58.672+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:58 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:58 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:04:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:04:59.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:04:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:04:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:04:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:04:59.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:04:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:04:59.705+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:59 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:04:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:04:59 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:00.733+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:00 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:00 np0005592159 ceph-mon[77081]: Health check update: 82 slow ops, oldest one blocked for 5288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:01.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:01.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:01.760+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:01 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:01 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:02.748+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:02 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:02 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:03.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:03.766+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:03 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:03 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:04.786+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:04 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:04 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:05.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:05.831+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:05 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:05 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:06.792+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:06 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:06 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:07.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:07.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:07.833+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:07 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:07 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:08.812+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:08 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:08 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:09.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:09.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:09.793+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:09 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:09 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:10.759+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:10 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:10 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:11 np0005592159 podman[269232]: 2026-01-22 15:05:11.018257496 +0000 UTC m=+0.085297357 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:05:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:11.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:11.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:11.754+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:11 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:12 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:12.710+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:12 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:13 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:13.679+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:13 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:14 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:14.714+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:14 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:15.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:15 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:15.730+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:15 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:16 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:16.778+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:16 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:17.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:17.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:05:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:17.791+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:17 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:18.835+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:18 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:19 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:19.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:19.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:19.880+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:19 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:20 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:20.891+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:20 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:21.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:21.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:21 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:21 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:21.918+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:21 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:22 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:22.878+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:22 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:23.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:23 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:23.900+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:23 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:05:24 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:24.927+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:24 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:25.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:25.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:25 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:25 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:25.900+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:25 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 19 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:26 np0005592159 ceph-mon[77081]: 19 slow requests (by type [ 'delayed' : 19 ] most affected pool [ 'vms' : 12 ])
Jan 22 10:05:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:26.923+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:26 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:26 np0005592159 podman[269615]: 2026-01-22 15:05:26.987356594 +0000 UTC m=+0.052216173 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:05:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:27.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:27.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:27 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:27.875+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:27 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:28 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:28.915+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:28 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:05:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 16K writes, 89K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.15 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1920 writes, 9811 keys, 1920 commit groups, 1.0 writes per commit group, ingest: 16.93 MB, 0.03 MB/s#012Interval WAL: 1920 writes, 1920 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     85.5      1.13              0.35        58    0.020       0      0       0.0       0.0#012  L6      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.6    134.8    116.5      4.65              1.82        57    0.082    549K    30K       0.0       0.0#012 Sum      1/0   11.43 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.6    108.5    110.4      5.79              2.18       115    0.050    549K    30K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     95.0     96.5      0.86              0.32        14    0.061     95K   3607       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    134.8    116.5      4.65              1.82        57    0.082    549K    30K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     85.8      1.13              0.35        57    0.020       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.094, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.62 GB write, 0.12 MB/s write, 0.61 GB read, 0.12 MB/s read, 5.8 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 0.9 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 67.51 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000434 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3555,64.10 MB,21.087%) FilterBlock(115,1.48 MB,0.487152%) IndexBlock(115,1.93 MB,0.633526%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:05:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:29 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:29.885+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:29 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:30 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:30 np0005592159 ceph-mon[77081]: Health check update: 19 slow ops, oldest one blocked for 5318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:30.881+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:30 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:31 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:31.867+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:31 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:32.863+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:32 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:33 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:33.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:05:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:05:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:33.882+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:33 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:34 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:34.921+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:34 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:35.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:35 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:35 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:35.963+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:35 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:36.979+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:36 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:37.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:37.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:37 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:37.939+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:37 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:38 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:38.918+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:38 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:39.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:39.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:39 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:39.926+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:39 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:40 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:40 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:40.938+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:40 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:41.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:41.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:41.953+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:41 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:42 np0005592159 podman[269692]: 2026-01-22 15:05:42.081491501 +0000 UTC m=+0.129237623 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 10:05:42 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:42.995+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:42 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:43.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:43.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:43.954+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:43 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:44.920+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:44 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:45.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:45.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:45 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:45 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:45.925+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:45 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:46 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:46.958+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:46 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:47.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:05:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:05:47.242 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:05:47 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:47.956+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:47 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:48 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:48.984+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:48 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:49.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:49 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:50.035+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:50 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:50 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:50.992+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:50 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:51.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:51 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:52.000+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:52 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:53.035+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:53 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:53.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:54 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:54.086+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:54 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:55.088+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:55 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:55.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 10:05:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:55 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:55.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:05:56 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:56 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:05:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:56.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:56 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 102 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:57.109+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:57 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:57 np0005592159 ceph-mon[77081]: 102 slow requests (by type [ 'delayed' : 102 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:05:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:05:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:57.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:05:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:57.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:58 np0005592159 podman[269726]: 2026-01-22 15:05:58.030198234 +0000 UTC m=+0.081805146 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:05:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:58.100+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:58 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:58 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:05:59.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:59 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:05:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:05:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:05:59.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:05:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:05:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:05:59.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:05:59 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:00.085+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:00 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:00 np0005592159 ceph-mon[77081]: Health check update: 102 slow ops, oldest one blocked for 5348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:01.087+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:01 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:01.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:01.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:01 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:02.115+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:02 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:03.144+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:03 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:03.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:03.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:03 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:04.121+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:04 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:04 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:05.099+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:05 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:05.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:05.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:05 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:05 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:06.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:06 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:06 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:07.104+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:07 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:07.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:07 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:06:08 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:08.102+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:08 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:09 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:09.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:09.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:09 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:10.184+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:10 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:11.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:11.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:11 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:11.230+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:11 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:12 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:12.241+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:12 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:13 np0005592159 podman[269804]: 2026-01-22 15:06:13.080028824 +0000 UTC m=+0.119024597 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 22 10:06:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:13.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:13 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:13.272+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:13 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:14 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:14.260+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:14 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:15.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:15.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:15.297+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:15 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:15 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:16.285+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:16 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:16 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:17.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:17.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:17.255+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:17 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:17 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:18.241+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:18 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:18 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:19.216+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:19 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:19.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:20.168+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:20 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #184. Immutable memtables: 0.
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.187834) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 184
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380187875, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 1478, "num_deletes": 250, "total_data_size": 2770066, "memory_usage": 2825512, "flush_reason": "Manual Compaction"}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #185: started
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380200772, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 185, "file_size": 1191330, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88906, "largest_seqno": 90379, "table_properties": {"data_size": 1186443, "index_size": 2090, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14800, "raw_average_key_size": 21, "raw_value_size": 1174975, "raw_average_value_size": 1727, "num_data_blocks": 89, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094290, "oldest_key_time": 1769094290, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 12988 microseconds, and 6771 cpu microseconds.
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.200822) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #185: 1191330 bytes OK
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.200844) [db/memtable_list.cc:519] [default] Level-0 commit table #185 started
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203189) [db/memtable_list.cc:722] [default] Level-0 commit table #185: memtable #1 done
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203209) EVENT_LOG_v1 {"time_micros": 1769094380203202, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.203230) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 2763022, prev total WAL file size 2763022, number of live WAL files 2.
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000181.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.204843) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [185(1163KB)], [183(11MB)]
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380204956, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [185], "files_L6": [183], "score": -1, "input_data_size": 13178857, "oldest_snapshot_seqno": -1}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #186: 13924 keys, 9876587 bytes, temperature: kUnknown
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380301806, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 186, "file_size": 9876587, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9804398, "index_size": 36300, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34821, "raw_key_size": 383446, "raw_average_key_size": 27, "raw_value_size": 9571032, "raw_average_value_size": 687, "num_data_blocks": 1296, "num_entries": 13924, "num_filter_entries": 13924, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 186, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.302155) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9876587 bytes
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.303701) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 136.0 rd, 101.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 11.4 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(19.4) write-amplify(8.3) OK, records in: 14402, records dropped: 478 output_compression: NoCompression
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.303738) EVENT_LOG_v1 {"time_micros": 1769094380303722, "job": 118, "event": "compaction_finished", "compaction_time_micros": 96937, "compaction_time_cpu_micros": 53246, "output_level": 6, "num_output_files": 1, "total_output_size": 9876587, "num_input_records": 14402, "num_output_records": 13924, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380304397, "job": 118, "event": "table_file_deletion", "file_number": 185}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000183.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094380308610, "job": 118, "event": "table_file_deletion", "file_number": 183}
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.204693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308770) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:06:20.308780) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:06:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:21.122+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:21 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:21 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:21 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:21.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:22.080+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:22 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:22 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:23.111+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:23 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 10:06:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:23 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:23.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:24.071+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:24 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:24 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:24 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:25.082+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:25 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:25.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:25 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:25 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:26.064+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:26 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 84 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:26 np0005592159 ceph-mon[77081]: 84 slow requests (by type [ 'delayed' : 84 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:06:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:06:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:06:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:27.100+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:27 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:27.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:27.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:28.133+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:28 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:28.233 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:06:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:28.234 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:06:28 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:29 np0005592159 podman[270020]: 2026-01-22 15:06:29.028875672 +0000 UTC m=+0.081413085 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:06:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:29.182+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:29 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:29.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:29 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:30.232+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:30 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:30 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:30 np0005592159 ceph-mon[77081]: Health check update: 84 slow ops, oldest one blocked for 5378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:06:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.5 total, 600.0 interval#012Cumulative writes: 12K writes, 40K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 3884 syncs, 3.13 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1017 writes, 2173 keys, 1017 commit groups, 1.0 writes per commit group, ingest: 1.28 MB, 0.00 MB/s#012Interval WAL: 1017 writes, 473 syncs, 2.15 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:06:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:31.267+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:31 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:31.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:31 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:32.235+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:32 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:32 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:33.187+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:33 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:33.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:33 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:06:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:34.225+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:34 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:34 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:34.235 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:06:34 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:35.204+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:35 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:35.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:35.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:35 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:35 np0005592159 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:36.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:36 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:37 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:37.250+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:37 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:37.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:37.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:38 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:38.263+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:38 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:39 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:39.288+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:39 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:39.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:39.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:40 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:40.253+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:40 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:41 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592159 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:41 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:41.210+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:41 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:41.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:41.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:42 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:42.244+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:42 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:43.231+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:43 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:43.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:44 np0005592159 podman[270146]: 2026-01-22 15:06:44.055061431 +0000 UTC m=+0.109107007 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:06:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:44.229+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:44 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:44 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:45.199+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:45 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:45.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:45.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:45 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:45 np0005592159 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:46.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:46 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 103 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:46 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:47.184+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:47 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:06:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:06:47.243 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:06:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:47.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:47.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:47 np0005592159 ceph-mon[77081]: 103 slow requests (by type [ 'delayed' : 103 ] most affected pool [ 'vms' : 68 ])
Jan 22 10:06:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:48.153+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:48 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:48 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:49.182+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:49 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:49.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:49.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:49 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:50.144+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:50 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:50 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:50 np0005592159 ceph-mon[77081]: Health check update: 103 slow ops, oldest one blocked for 5397 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:51.193+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:51 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:06:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:51.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:06:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:51.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:52 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:52.174+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:52 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:53.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:53 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:53.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:54.163+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:54 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:54 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:54 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:55.124+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:55 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:55.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:55 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:55 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5402 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:06:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:06:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:56.153+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:56 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:56 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:57.158+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:57 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:06:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:06:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:57.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:57 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:58 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:58.146+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:58 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:59 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:06:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:06:59.142+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:06:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:06:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:06:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:06:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:06:59.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:06:59 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592159 podman[270180]: 2026-01-22 15:07:00.019092588 +0000 UTC m=+0.074681659 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:07:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:00.186+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:00 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:00 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5407 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:01.211+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:01 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 18 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:07:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:01.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:01.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:01 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:02.171+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:02 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:02 np0005592159 ceph-mon[77081]: 18 slow requests (by type [ 'delayed' : 18 ] most affected pool [ 'vms' : 11 ])
Jan 22 10:07:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:03.151+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:03 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:03 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:04.174+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:04 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:04 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:05.161+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:05 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:05.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:05.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:06.209+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:06 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:06 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:06 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:07 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:07.226+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:07 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:07.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:08 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:08.253+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:08 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:09 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:09.277+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:09 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:09.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:09.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:10 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:10.229+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:10 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:11 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:11 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:11.277+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:11 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:12 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:12.325+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:12 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:13.352+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:13 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:13.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:14 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:14.332+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:14 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:14 np0005592159 podman[270254]: 2026-01-22 15:07:14.83106425 +0000 UTC m=+0.135221158 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:07:15 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:15 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:15.378+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:15 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:15.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:16 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:16.364+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:16 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:17 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:07:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:17.337+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:17 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000078s ======
Jan 22 10:07:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:17.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000078s
Jan 22 10:07:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:17.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:18.353+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:18 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:18 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:07:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:07:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:07:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/394610157' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:07:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:19.341+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:19 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:19 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:19.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:19.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:20.355+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:20 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:20 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:20 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 5427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:21.347+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:21 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:21.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:21.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:21 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:22.327+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:22 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:23.320+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:23 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e4ed6f0 =====
Jan 22 10:07:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e4ed6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:23 np0005592159 radosgw[80769]: beast: 0x7f935e4ed6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:23.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:23 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:24.312+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:24 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:24 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:25.290+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:25 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:25 np0005592159 ceph-mon[77081]: Health check update: 81 slow ops, oldest one blocked for 5432 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:25.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:25.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:25 np0005592159 systemd[1]: Starting dnf makecache...
Jan 22 10:07:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:26 np0005592159 dnf[270337]: Metadata cache refreshed recently.
Jan 22 10:07:26 np0005592159 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 22 10:07:26 np0005592159 systemd[1]: Finished dnf makecache.
Jan 22 10:07:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:26.295+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:26 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 81 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:26 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:27.280+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:27 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:27 np0005592159 ceph-mon[77081]: 81 slow requests (by type [ 'delayed' : 81 ] most affected pool [ 'vms' : 53 ])
Jan 22 10:07:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:27.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:27.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:28.264+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:28 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:28 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:29.309+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:29 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:29.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:07:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:29.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:07:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:30.359+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:30 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:30 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592159 podman[270341]: 2026-01-22 15:07:31.018380831 +0000 UTC m=+0.073279622 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:07:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:31.385+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:31 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:31 np0005592159 ceph-mon[77081]: Health check update: 81 slow ops, oldest one blocked for 5437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:31.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:32.417+0000 7f47f8ed4640 -1 osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:32 np0005592159 ceph-osd[79779]: osd.2 173 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:32 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 e174: 3 total, 3 up, 3 in
Jan 22 10:07:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:33.387+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:33 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:33.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:33.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #187. Immutable memtables: 0.
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.944706) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 187
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453944755, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 1275, "num_deletes": 306, "total_data_size": 2195447, "memory_usage": 2237056, "flush_reason": "Manual Compaction"}
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #188: started
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453954651, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 188, "file_size": 1441669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90384, "largest_seqno": 91654, "table_properties": {"data_size": 1436443, "index_size": 2429, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 14759, "raw_average_key_size": 21, "raw_value_size": 1424591, "raw_average_value_size": 2076, "num_data_blocks": 104, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 306, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094381, "oldest_key_time": 1769094381, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 9968 microseconds, and 3829 cpu microseconds.
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.954683) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #188: 1441669 bytes OK
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.954697) [db/memtable_list.cc:519] [default] Level-0 commit table #188 started
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956464) [db/memtable_list.cc:722] [default] Level-0 commit table #188: memtable #1 done
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956477) EVENT_LOG_v1 {"time_micros": 1769094453956473, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.956492) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 2189000, prev total WAL file size 2189000, number of live WAL files 2.
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000184.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957124) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [188(1407KB)], [186(9645KB)]
Jan 22 10:07:33 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094453957189, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [188], "files_L6": [186], "score": -1, "input_data_size": 11318256, "oldest_snapshot_seqno": -1}
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #189: 13979 keys, 9685429 bytes, temperature: kUnknown
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454034131, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 189, "file_size": 9685429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9612843, "index_size": 36505, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35013, "raw_key_size": 385337, "raw_average_key_size": 27, "raw_value_size": 9378605, "raw_average_value_size": 670, "num_data_blocks": 1301, "num_entries": 13979, "num_filter_entries": 13979, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 189, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.034404) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 9685429 bytes
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.035371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.0 rd, 125.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.4 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.7) OK, records in: 14610, records dropped: 631 output_compression: NoCompression
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.035386) EVENT_LOG_v1 {"time_micros": 1769094454035379, "job": 120, "event": "compaction_finished", "compaction_time_micros": 77004, "compaction_time_cpu_micros": 45319, "output_level": 6, "num_output_files": 1, "total_output_size": 9685429, "num_input_records": 14610, "num_output_records": 13979, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454035672, "job": 120, "event": "table_file_deletion", "file_number": 188}
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000186.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094454037112, "job": 120, "event": "table_file_deletion", "file_number": 186}
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:33.957024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037159) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:07:34.037161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:07:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:34.436+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:34 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:35 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:35.473+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:35 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:35.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:35.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:07:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:36.425+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:36 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:37 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:37.455+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:37 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:37.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:37.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:38.426+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:38 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:38 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:39.410+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:39 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:39 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:39.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:39.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:40.396+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:40 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:40 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:40 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5447 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e174 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:41.434+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:41 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000025s ======
Jan 22 10:07:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:41.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 22 10:07:41 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:07:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:42.405+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:42 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:43 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:43.361+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:43 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:43.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:43.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:44.357+0000 7f47f8ed4640 -1 osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:44 np0005592159 ceph-osd[79779]: osd.2 174 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e175 e175: 3 total, 3 up, 3 in
Jan 22 10:07:45 np0005592159 podman[270598]: 2026-01-22 15:07:45.078000439 +0000 UTC m=+0.127210089 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:07:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:45.392+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:45 np0005592159 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:45.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:45.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:46 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:46 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5452 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:46.415+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:46 np0005592159 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:47 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.244 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.245 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:07:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:07:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:07:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:47.424+0000 7f47f8ed4640 -1 osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:47 np0005592159 ceph-osd[79779]: osd.2 175 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:47.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:47.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:48 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e176 e176: 3 total, 3 up, 3 in
Jan 22 10:07:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:48.399+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:48 np0005592159 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:49.376+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:49 np0005592159 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:49 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:49.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:49.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:50.343+0000 7f47f8ed4640 -1 osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:50 np0005592159 ceph-osd[79779]: osd.2 176 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:50 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e177 e177: 3 total, 3 up, 3 in
Jan 22 10:07:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:51.353+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:51 np0005592159 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:51.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:52 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:52 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5457 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:52.318+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:52 np0005592159 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:53.276+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:53 np0005592159 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:53.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:53.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:54 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:54.274+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:54 np0005592159 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:55.270+0000 7f47f8ed4640 -1 osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:55 np0005592159 ceph-osd[79779]: osd.2 177 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 e178: 3 total, 3 up, 3 in
Jan 22 10:07:55 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:55.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:55.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:07:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:56.250+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:56 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 36 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:57 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:57 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5462 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:07:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:57.259+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:57 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:07:57.625 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:07:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:07:57.626 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:07:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:57.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:07:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:57.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:07:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:58.245+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:58 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:58 np0005592159 ceph-mon[77081]: 36 slow requests (by type [ 'delayed' : 36 ] most affected pool [ 'vms' : 21 ])
Jan 22 10:07:58 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:07:59.239+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:59 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:07:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:59 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:07:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:07:59.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:07:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:07:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:07:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:07:59.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:00.234+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:00 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:01.279+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:01 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:01 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:08:01.628 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:08:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:01.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:01 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:01 np0005592159 ceph-mon[77081]: Health check update: 36 slow ops, oldest one blocked for 5467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:01.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:02 np0005592159 podman[270683]: 2026-01-22 15:08:02.006393475 +0000 UTC m=+0.073260502 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 10:08:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:02.239+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:02 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:03.238+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:03 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:03.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:03.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:04.208+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:04 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:04 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:05.214+0000 7f47f8ed4640 -1 osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:05 np0005592159 ceph-osd[79779]: osd.2 178 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:05.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:05.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 e179: 3 total, 3 up, 3 in
Jan 22 10:08:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:06.202+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:07 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:07 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:07.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:07.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:08.200+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:08 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:09.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:09.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:10.168+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #190. Immutable memtables: 0.
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.620593) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 190
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490620655, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 842, "num_deletes": 329, "total_data_size": 1269048, "memory_usage": 1293432, "flush_reason": "Manual Compaction"}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #191: started
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490634114, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 191, "file_size": 822633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91659, "largest_seqno": 92496, "table_properties": {"data_size": 818649, "index_size": 1507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11369, "raw_average_key_size": 21, "raw_value_size": 809707, "raw_average_value_size": 1502, "num_data_blocks": 65, "num_entries": 539, "num_filter_entries": 539, "num_deletions": 329, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094454, "oldest_key_time": 1769094454, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 13585 microseconds, and 4315 cpu microseconds.
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.634188) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #191: 822633 bytes OK
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.634207) [db/memtable_list.cc:519] [default] Level-0 commit table #191 started
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.635956) [db/memtable_list.cc:722] [default] Level-0 commit table #191: memtable #1 done
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.635971) EVENT_LOG_v1 {"time_micros": 1769094490635966, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 1264241, prev total WAL file size 1264241, number of live WAL files 2.
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000187.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636912) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034323734' seq:72057594037927935, type:22 .. '6C6F676D0034353331' seq:0, type:0; will stop at (end)
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [191(803KB)], [189(9458KB)]
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490636984, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [191], "files_L6": [189], "score": -1, "input_data_size": 10508062, "oldest_snapshot_seqno": -1}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #192: 13845 keys, 10339609 bytes, temperature: kUnknown
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490742977, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 192, "file_size": 10339609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10266777, "index_size": 37135, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 34629, "raw_key_size": 383304, "raw_average_key_size": 27, "raw_value_size": 10033513, "raw_average_value_size": 724, "num_data_blocks": 1325, "num_entries": 13845, "num_filter_entries": 13845, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094490, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 192, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.743273) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 10339609 bytes
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.747713) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 99.1 rd, 97.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.2 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(25.3) write-amplify(12.6) OK, records in: 14518, records dropped: 673 output_compression: NoCompression
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.747752) EVENT_LOG_v1 {"time_micros": 1769094490747737, "job": 122, "event": "compaction_finished", "compaction_time_micros": 106056, "compaction_time_cpu_micros": 39521, "output_level": 6, "num_output_files": 1, "total_output_size": 10339609, "num_input_records": 14518, "num_output_records": 13845, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490748051, "job": 122, "event": "table_file_deletion", "file_number": 191}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000189.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094490749768, "job": 122, "event": "table_file_deletion", "file_number": 189}
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.636801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749804) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:10 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:08:10.749806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:08:11 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:11 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:11.129+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:11.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:11.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:12.152+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:12 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:13.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:13.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:13.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:13 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:13 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:14.195+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:15 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:15.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:15.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:16 np0005592159 podman[270710]: 2026-01-22 15:08:16.064067809 +0000 UTC m=+0.110052722 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:08:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:16.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:16 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:16 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:17.227+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:17.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:18 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:18.194+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:19.208+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:19 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:19.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:19.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:20.199+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:20 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:20 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:21.162+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:21.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:21.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:22.132+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:22 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:22 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:23.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:23.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:23.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:23 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:23 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:24.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:25 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:25.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:25.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:25.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:26.516+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:27.504+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:27 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:27 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:27.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:27.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:28.485+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:29.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:29.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:29.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:30.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:30 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:30 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:31.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:31.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:31.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:32 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:32 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:32.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:33 np0005592159 podman[270795]: 2026-01-22 15:08:33.032798043 +0000 UTC m=+0.085867831 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:08:33 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:33.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:33.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:33.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:34 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:34 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:34.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:35.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:35.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:36.456+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:36 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:36 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:37.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:37 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:37.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:38.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:38 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:39.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:39.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:39.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:40.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:40 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:41.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:41.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:41.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:42 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:42.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:43.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.59185786 +0000 UTC m=+0.045108268 container create e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:43 np0005592159 systemd[1]: Started libpod-conmon-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope.
Jan 22 10:08:43 np0005592159 systemd[1]: Started libcrun container.
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.568550832 +0000 UTC m=+0.021801270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.680880533 +0000 UTC m=+0.134130941 container init e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.690883083 +0000 UTC m=+0.144133471 container start e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True)
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.695189956 +0000 UTC m=+0.148440344 container attach e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 22 10:08:43 np0005592159 elastic_bhaskara[271156]: 167 167
Jan 22 10:08:43 np0005592159 systemd[1]: libpod-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope: Deactivated successfully.
Jan 22 10:08:43 np0005592159 conmon[271156]: conmon e9e50f4d6ab855d42868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope/container/memory.events
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.700475624 +0000 UTC m=+0.153726022 container died e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:08:43 np0005592159 systemd[1]: var-lib-containers-storage-overlay-ff7d6455c8781e90dcd3729e7d6520ca745d1f80029a715bcb8de0364eef1e50-merged.mount: Deactivated successfully.
Jan 22 10:08:43 np0005592159 podman[271140]: 2026-01-22 15:08:43.74978793 +0000 UTC m=+0.203038308 container remove e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:08:43 np0005592159 systemd[1]: libpod-conmon-e9e50f4d6ab855d4286869dfc968604648f7823d0362885bfe3f16c7b2ad37ec.scope: Deactivated successfully.
Jan 22 10:08:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:43.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:43 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:43 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5513 sec, osd.2 has slow ops (SLOW_OPS)
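[editor's note] The SLOW_OPS updates repeating through this window always report the same 109 blocked ops on osd.2, and the "oldest one blocked" age grows by roughly the wall-clock interval between updates (5513 sec here, 5518 sec at 10:08:51, 5523 sec at 10:08:56, and so on), i.e. the oldest op is simply never completing rather than new ops piling up. As a minimal sketch only, the Python snippet below shows one way to track that age from a saved journal extract; the file path journal.txt and the regular expression are assumptions for illustration, not anything taken from this log's tooling.

import re

# Minimal sketch (assumption): scan a saved journal extract for the
# "Health check update: ... slow ops ..." lines seen above and print
# how the "oldest blocked" age evolves per daemon over time.
pattern = re.compile(
    r"Health check update: (\d+) slow ops, oldest one blocked for (\d+) sec, (\S+) has slow ops"
)

with open("journal.txt") as f:  # path is an assumption
    for line in f:
        m = pattern.search(line)
        if m:
            count, age, daemon = m.groups()
            print(f"{daemon}: {count} slow ops, oldest blocked {age}s")

[editor's note] Run against the lines above, this would print a steadily increasing age for osd.2, which is consistent with a single stuck request rather than transient load.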
Jan 22 10:08:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:43.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:43 np0005592159 podman[271181]: 2026-01-22 15:08:43.964303086 +0000 UTC m=+0.046751350 container create 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 22 10:08:44 np0005592159 podman[271181]: 2026-01-22 15:08:43.946292666 +0000 UTC m=+0.028740970 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:08:44 np0005592159 systemd[1]: Started libpod-conmon-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope.
Jan 22 10:08:44 np0005592159 systemd[1]: Started libcrun container.
Jan 22 10:08:44 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:44 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:44 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:44 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:08:44 np0005592159 podman[271181]: 2026-01-22 15:08:44.12734018 +0000 UTC m=+0.209788494 container init 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:08:44 np0005592159 podman[271181]: 2026-01-22 15:08:44.135639776 +0000 UTC m=+0.218088050 container start 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 22 10:08:44 np0005592159 podman[271181]: 2026-01-22 15:08:44.139346883 +0000 UTC m=+0.221795207 container attach 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:08:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:44.516+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:44 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]: [
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:    {
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "available": false,
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "ceph_device": false,
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "lsm_data": {},
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "lvs": [],
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "path": "/dev/sr0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "rejected_reasons": [
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "Has a FileSystem",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "Insufficient space (<5GB)"
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        ],
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        "sys_api": {
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "actuators": null,
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "device_nodes": "sr0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "devname": "sr0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "human_readable_size": "482.00 KB",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "id_bus": "ata",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "model": "QEMU DVD-ROM",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "nr_requests": "2",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "parent": "/dev/sr0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "partitions": {},
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "path": "/dev/sr0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "removable": "1",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "rev": "2.5+",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "ro": "0",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "rotational": "1",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "sas_address": "",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "sas_device_handle": "",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "scheduler_mode": "mq-deadline",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "sectors": 0,
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "sectorsize": "2048",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "size": 493568.0,
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "support_discard": "2048",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "type": "disk",
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:            "vendor": "QEMU"
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:        }
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]:    }
Jan 22 10:08:45 np0005592159 inspiring_borg[271197]: ]
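[editor's note] The JSON block emitted by the short-lived inspiring_borg container above is a per-device inventory report: the only device listed is /dev/sr0, marked "available": false because it has a filesystem and is under 5 GB. As a minimal sketch only, the Python snippet below shows how such a report could be filtered down to usable versus rejected devices; the file name inventory.json is an assumption, and the field names simply mirror the output shown above.

import json

# Minimal sketch (assumption): parse a device-inventory JSON dump like the
# container output above (saved to inventory.json) and report which devices
# are usable versus why they were rejected.
with open("inventory.json") as f:  # file name is an assumption
    devices = json.load(f)

for dev in devices:
    path = dev.get("path", "?")
    if dev.get("available"):
        size = dev.get("sys_api", {}).get("human_readable_size", "size unknown")
        print(f"{path}: available ({size})")
    else:
        reasons = ", ".join(dev.get("rejected_reasons", [])) or "no reason given"
        print(f"{path}: rejected ({reasons})")

[editor's note] For the report above this would print: /dev/sr0: rejected (Has a FileSystem, Insufficient space (<5GB)).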
Jan 22 10:08:45 np0005592159 systemd[1]: libpod-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Deactivated successfully.
Jan 22 10:08:45 np0005592159 systemd[1]: libpod-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Consumed 1.126s CPU time.
Jan 22 10:08:45 np0005592159 podman[271181]: 2026-01-22 15:08:45.25261767 +0000 UTC m=+1.335065964 container died 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:08:45 np0005592159 systemd[1]: var-lib-containers-storage-overlay-2d95da456b65c3adbcd4f27d2c38452e0140f54f87f69bcdd8f5905209153700-merged.mount: Deactivated successfully.
Jan 22 10:08:45 np0005592159 podman[271181]: 2026-01-22 15:08:45.307992659 +0000 UTC m=+1.390440923 container remove 160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_borg, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:08:45 np0005592159 systemd[1]: libpod-conmon-160645468ef7f6fa4e5a2edc6f47ad5af8c4a525c22e785e87f2e416f79bae9a.scope: Deactivated successfully.
Jan 22 10:08:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:45.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:45.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:08:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:46.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:47 np0005592159 podman[272291]: 2026-01-22 15:08:47.058022631 +0000 UTC m=+0.106121607 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.245 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:08:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:08:47.246 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:08:47 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:47.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:47.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:48 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:48.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:49.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:49 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:49.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:50.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:51 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:51 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:51.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:51.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:52 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:52 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:52.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:53.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:53 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:53.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:53.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:54.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:54 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:55.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:55.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:08:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:08:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:08:56 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:56 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:08:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:56.683+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:57.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:57.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:57.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:58.750+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:58 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:08:58 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:08:59.787+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:08:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:08:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:08:59.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:08:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:08:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:08:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:08:59.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:00 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:00.762+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:01 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:01 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:01.773+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:01.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:01.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:02 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:02.766+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:03 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:03.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:03.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:03.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:04 np0005592159 podman[272427]: 2026-01-22 15:09:04.014503146 +0000 UTC m=+0.068026091 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:09:04 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:04.104 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:09:04 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:04.106 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:09:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:04.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:05 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:05.723+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:05.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:05.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:06 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:06 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:06.674+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:07.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:07.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:07.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:08 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:08.673+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:09.680+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:09.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:09.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:10 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:10.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:11.108 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:09:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:11 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:11 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:11.746+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:11.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:11.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:12.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:12 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:13.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:13.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:13.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:14.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:14 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:15.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:15 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:15.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:15.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:16.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:16 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:17.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:17.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:17 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:17.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:18 np0005592159 podman[272457]: 2026-01-22 15:09:18.044280946 +0000 UTC m=+0.098583430 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:09:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:09:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:09:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:09:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2298473774' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:09:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:18.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:18 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:19.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:19.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:19 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:19.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:20.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:20 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:21.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:21.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:21.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:22 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:22.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:23 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:23.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:23.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:23.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:24 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:24.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 109 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:25 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:25.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:25.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:25.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:26.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:27 np0005592159 ceph-mon[77081]: 109 slow requests (by type [ 'delayed' : 109 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:09:27 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:27 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:27.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:27.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:28.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:28 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:28 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:29.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:30.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:31 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:31 np0005592159 ceph-mon[77081]: Health check update: 109 slow ops, oldest one blocked for 5558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:31.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:31.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:32 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:32.636+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:33 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:33.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:33.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:34 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:34.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:35 np0005592159 podman[272543]: 2026-01-22 15:09:35.026628609 +0000 UTC m=+0.076743679 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 22 10:09:35 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:35.629+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:35.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:36 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:36 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:36.581+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:37.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:37.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:38 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:38.595+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:39 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:39.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:39.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:40.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:40 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:40.586+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:41 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:41.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:42.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:42 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:42.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:43.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:43.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:44.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:44.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:44 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:45.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:45 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:46.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:46.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:46 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:09:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:09:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:09:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:47.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:47 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:47.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:48.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:48.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:48 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592159 podman[272620]: 2026-01-22 15:09:49.021284281 +0000 UTC m=+0.084596085 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 22 10:09:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:49.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:49.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:50.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:50.492+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:50 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:50 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:51.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:51 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:51.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:52.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:52.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:52 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:53.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:53.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:54 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:09:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:54.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:09:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:54.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 14 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:55 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:55.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:55.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:56.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:56 np0005592159 ceph-mon[77081]: 14 slow requests (by type [ 'delayed' : 14 ] most affected pool [ 'vms' : 10 ])
Jan 22 10:09:56 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:09:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:09:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:56.454+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:57 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:57.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:09:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:57.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:09:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:09:58.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:09:58 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:58.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:09:59.524+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:09:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:59 np0005592159 podman[272943]: 2026-01-22 15:09:59.667497963 +0000 UTC m=+0.065315700 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 22 10:09:59 np0005592159 podman[272943]: 2026-01-22 15:09:59.782732167 +0000 UTC m=+0.180549884 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:09:59 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:09:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:09:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:09:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:09:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:09:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:09:59.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:00.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:00.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:00 np0005592159 podman[273099]: 2026-01-22 15:10:00.586391691 +0000 UTC m=+0.078154016 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:10:00 np0005592159 podman[273099]: 2026-01-22 15:10:00.598679282 +0000 UTC m=+0.090441557 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:10:00 np0005592159 podman[273164]: 2026-01-22 15:10:00.844971775 +0000 UTC m=+0.062133946 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 22 10:10:00 np0005592159 podman[273164]: 2026-01-22 15:10:00.853637392 +0000 UTC m=+0.070799533 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, architecture=x86_64, com.redhat.component=keepalived-container, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, description=keepalived for Ceph, version=2.2.4)
Jan 22 10:10:00 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops
Jan 22 10:10:00 np0005592159 ceph-mon[77081]: Health check update: 14 slow ops, oldest one blocked for 5587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:01.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:01.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:02.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:02.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:10:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:03.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:03.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:04.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:04 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:04.419+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:05.008 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:10:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:05.009 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:10:05 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:05.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:05.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:06.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:06 np0005592159 podman[273381]: 2026-01-22 15:10:06.041843959 +0000 UTC m=+0.086067722 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:10:06 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:06 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:06.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:07 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:07.011 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:10:07 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:07.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:07.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:08.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:08 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:08.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:09.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:09 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:10:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:09.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:10.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:10.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:10 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:10 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:11.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:12 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:12.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:12.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:12 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:13.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:13.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:14.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:14 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:14.473+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:15 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:15.471+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:15.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:16.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:16.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:16 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:16 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5607 sec, osd.2 has slow ops (SLOW_OPS)
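[analysis note] Two message families carry the slow-op story here: the osd.2 "get_health_metrics reporting N slow ops" lines (repeated once per second) and the mon's periodic "Health check update: ... (SLOW_OPS)" summaries. A short sketch, assuming the journal has been exported to a plain text file (the file name and regexes are illustrative, not a Ceph-defined format), that tracks both:

```python
import re

# Patterns modelled on the messages visible in this journal.
OSD_RE = re.compile(r'get_health_metrics reporting (\d+) slow ops')
MON_RE = re.compile(r'Health check update: (\d+) slow ops, '
                    r'oldest one blocked for (\d+) sec, (\S+) has slow ops')

def summarize(path="journal.txt"):
    osd_counts, blocked = [], []
    with open(path) as fh:
        for line in fh:
            if (m := OSD_RE.search(line)):
                osd_counts.append(int(m.group(1)))
            if (m := MON_RE.search(line)):
                blocked.append((m.group(3), int(m.group(1)), int(m.group(2))))
    if osd_counts:
        print(f"osd reports: min={min(osd_counts)} max={max(osd_counts)} last={osd_counts[-1]}")
    for who, count, secs in blocked:
        print(f"{who}: {count} slow ops, oldest blocked {secs} s (~{secs/3600:.1f} h)")

if __name__ == "__main__":
    summarize()
```

Run over this window it would show the count pinned at 110 (briefly dipping to 79 around 10:10:25) while the oldest op on osd.2 has been blocked for well over 5600 seconds.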
Jan 22 10:10:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:17.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:17.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:18.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:18 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:18.415+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:19.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:19 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:19.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:20.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:20 np0005592159 podman[273460]: 2026-01-22 15:10:20.081957888 +0000 UTC m=+0.132222580 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
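[analysis note] The podman health_status line above embeds the container's full config_data as a Python-style dict literal (single quotes, True, nested dicts), so it can be lifted out and inspected with ast.literal_eval. A minimal sketch under that assumption; the helper name and the abridged example line are illustrative, and the brace matcher assumes the first balanced {...} after "config_data=" is the whole dict:

```python
import ast

def extract_config_data(line: str):
    """Return the config_data dict embedded in a podman health_status line, if any."""
    start = line.find("config_data=")
    if start < 0:
        return None
    i = line.index("{", start)
    depth, j, in_str = 0, i, False
    while j < len(line):
        ch = line[j]
        if ch == "'":
            in_str = not in_str
        elif not in_str:
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    break
        j += 1
    return ast.literal_eval(line[i:j + 1])

# Abridged, hypothetical line in the same shape as the ovn_controller entry above.
line = ("container health_status ... name=ovn_controller, health_status=healthy, "
        "config_data={'image': 'quay.io/podified-antelope-centos9/"
        "openstack-ovn-controller:current-podified', 'net': 'host', "
        "'privileged': True, 'healthcheck': {'test': '/openstack/healthcheck'}}, ...")
cfg = extract_config_data(line)
print(cfg["healthcheck"]["test"], cfg["privileged"])
```

This is handy for confirming which volumes, healthcheck command, and EDPM_CONFIG_HASH a given container was launched with, without re-reading the whole one-line blob by eye.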
Jan 22 10:10:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:20.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:21.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:21 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:21.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 10:10:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:22.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 10:10:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:22.492+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:22 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:22 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:23.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:23 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:23.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:24.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:24.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:25 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:25.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:10:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:25.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:26.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:26 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:26 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 54 ])
Jan 22 10:10:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:26.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
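[analysis note] The recurring "_set_new_cache_sizes" line from mon.compute-2 reports the mon's autotuned cache budget and its split between incremental map, full map, and kv allocations. A small sketch that parses one of these lines and prints each allocation in MiB and as a fraction of cache_size; the regex is an assumption based only on the lines in this journal:

```python
import re

CACHE_RE = re.compile(
    r'_set_new_cache_sizes cache_size:(\d+) inc_alloc: (\d+) '
    r'full_alloc: (\d+) kv_alloc: (\d+)'
)

line = ("mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 "
        "inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104")
cache, inc, full, kv = (int(x) for x in CACHE_RE.search(line).groups())
for name, val in (("inc_alloc", inc), ("full_alloc", full), ("kv_alloc", kv)):
    print(f"{name}: {val / 2**20:.0f} MiB ({val / cache:.1%} of cache_size)")
```

The figures are identical every ~5 seconds throughout this window, so the mon's memory target is stable; it is only being relogged on each tick.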
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #193. Immutable memtables: 0.
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.381659) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 193
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627381697, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2241, "num_deletes": 487, "total_data_size": 4152597, "memory_usage": 4221152, "flush_reason": "Manual Compaction"}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #194: started
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627398795, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 194, "file_size": 2692767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 92501, "largest_seqno": 94737, "table_properties": {"data_size": 2684212, "index_size": 4536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 26787, "raw_average_key_size": 22, "raw_value_size": 2663684, "raw_average_value_size": 2282, "num_data_blocks": 194, "num_entries": 1167, "num_filter_entries": 1167, "num_deletions": 487, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094490, "oldest_key_time": 1769094490, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 17168 microseconds, and 6053 cpu microseconds.
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398831) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #194: 2692767 bytes OK
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.398848) [db/memtable_list.cc:519] [default] Level-0 commit table #194 started
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402160) [db/memtable_list.cc:722] [default] Level-0 commit table #194: memtable #1 done
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402196) EVENT_LOG_v1 {"time_micros": 1769094627402171, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.402212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 4141391, prev total WAL file size 4141655, number of live WAL files 2.
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000190.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.404197) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [194(2629KB)], [192(10097KB)]
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627404249, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [194], "files_L6": [192], "score": -1, "input_data_size": 13032376, "oldest_snapshot_seqno": -1}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #195: 14021 keys, 11311235 bytes, temperature: kUnknown
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627472953, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 195, "file_size": 11311235, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11236060, "index_size": 39030, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386725, "raw_average_key_size": 27, "raw_value_size": 10998447, "raw_average_value_size": 784, "num_data_blocks": 1405, "num_entries": 14021, "num_filter_entries": 14021, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094627, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 195, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.473240) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 11311235 bytes
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.475166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 164.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 9.9 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(9.0) write-amplify(4.2) OK, records in: 15012, records dropped: 991 output_compression: NoCompression
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.475187) EVENT_LOG_v1 {"time_micros": 1769094627475177, "job": 124, "event": "compaction_finished", "compaction_time_micros": 68836, "compaction_time_cpu_micros": 26855, "output_level": 6, "num_output_files": 1, "total_output_size": 11311235, "num_input_records": 15012, "num_output_records": 14021, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627475843, "job": 124, "event": "table_file_deletion", "file_number": 194}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000192.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094627477798, "job": 124, "event": "table_file_deletion", "file_number": 192}
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.404144) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477936) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477941) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477943) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:10:27 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:10:27.477946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
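[analysis note] The rocksdb burst above (JOB 123 flush, JOB 124 manual compaction on the mon's store.db) emits machine-readable EVENT_LOG_v1 JSON payloads inline. A sketch, assuming the journal is saved to a text file (file name and framing are assumptions), that extracts those payloads and summarizes flush/compaction activity:

```python
import json
import re

EVENT_RE = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

def rocksdb_events(path="journal.txt"):
    """Yield each EVENT_LOG_v1 JSON object found in the exported journal."""
    with open(path) as fh:
        for line in fh:
            m = EVENT_RE.search(line)
            if m:
                yield json.loads(m.group(1))

for ev in rocksdb_events():
    kind = ev.get("event")
    if kind == "flush_finished":
        print(f"job {ev['job']}: flush finished, lsm_state={ev['lsm_state']}")
    elif kind == "compaction_finished":
        mb = ev["total_output_size"] / 2**20
        print(f"job {ev['job']}: compaction to L{ev['output_level']}, "
              f"{mb:.1f} MiB out, {ev['compaction_time_micros']} us")
```

For this window it would report the L0 flush of table #194 followed by the manual compaction into L6 table #195 (about 10.8 MiB written in roughly 69 ms), i.e. routine mon store maintenance rather than anything related to the slow ops.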
Jan 22 10:10:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:27.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:27.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:28.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:28 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:28.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:29 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:29.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:29.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:30.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:30.550+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:30 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:31.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:31 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:31.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:32.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:32.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:32 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:33.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:10:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:33.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:10:34 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:34.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:34.586+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:35 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:35.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:35.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:36 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:36 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:36.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:36.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:36 np0005592159 podman[273546]: 2026-01-22 15:10:36.989928205 +0000 UTC m=+0.052060883 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:10:37 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:37.659+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:37.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:38.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:38 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:38.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:39 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:39.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:39.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:40.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:40 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:40.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:41 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5628 sec, osd.2 has slow ops (SLOW_OPS)
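[analysis note] Across this window the SLOW_OPS updates report the oldest op blocked for 5607, 5612, 5618, 5623, and then 5628 seconds, i.e. the blocked-for figure grows in lockstep with wall-clock time, which indicates the oldest op on osd.2 (the rbd_mirror_snapshot_schedule omap read) is stuck rather than slowly draining. A sketch that checks this directly from the journal; the regex and the hard-coded year (taken from the beast timestamps, since the syslog prefix omits it) are assumptions:

```python
import re
from datetime import datetime

HEALTH_RE = re.compile(
    r'^(?P<ts>\w{3} +\d+ [\d:]+) \S+ ceph-mon\[\d+\]: Health check update: '
    r'\d+ slow ops, oldest one blocked for (?P<secs>\d+) sec'
)

def stuck_check(lines):
    """Print wall-clock vs blocked-time deltas between successive SLOW_OPS updates."""
    prev = None
    for line in lines:
        m = HEALTH_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime("2026 " + m.group("ts"), "%Y %b %d %H:%M:%S")
        secs = int(m.group("secs"))
        if prev:
            wall = (ts - prev[0]).total_seconds()
            print(f"wall +{wall:.0f}s, blocked +{secs - prev[1]}s")
        prev = (ts, secs)

with open("journal.txt") as fh:
    stuck_check(fh)
```

If the two deltas match on every step, as they do here, the op is not making progress and the next place to look is the OSD's in-flight op dump rather than the health summaries.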
Jan 22 10:10:41 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:41.703+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:42.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:42.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:42.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:42 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:43.702+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:44.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:44.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:44 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:44.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:45.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:46.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:46.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:46.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:46 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.247 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.248 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:10:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:10:47.248 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:10:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:47.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:48.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:48.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:48 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:48 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:48.633+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:49 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:49.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:50.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:50.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:50 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:50.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:51 np0005592159 podman[273624]: 2026-01-22 15:10:51.003736365 +0000 UTC m=+0.069393766 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:10:51 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:51.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:52.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:52.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:52 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:52 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:52.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:53 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:53.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:54.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:54.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:54 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:54.610+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:55 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:55.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:56.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:56.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:56 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:56.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:10:57 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:57 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:10:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:57.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:10:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:10:58.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:10:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:10:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:10:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:10:58.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:10:58 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:58.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:59 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:10:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:10:59.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:10:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:00.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:00.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:00 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:01.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:02.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:02 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:02.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:03.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:04.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:04.611+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:05 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:05.618+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.832 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:11:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.834 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:11:05 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:05.834 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:11:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:06.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:06 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:06.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:06.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:07 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:07 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:07.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592159 podman[273708]: 2026-01-22 15:11:08.009221195 +0000 UTC m=+0.070555104 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:11:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:08.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:08.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:08 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:08.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:09 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:09.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:10.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:10.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:10 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:11:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:11:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:10.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:11.757+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:11 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:12.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:12.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:12.751+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:12 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:12 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:13.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:14 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:14.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:14.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:14.709+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:15 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:15.751+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:16.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:16.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:16 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:16.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:11:17 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:17 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:17.757+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:11:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:18.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:11:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:18.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:18 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:18.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:19 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:19.810+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:20.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:20.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:20.792+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:20 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:21.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:21 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:22 np0005592159 podman[273915]: 2026-01-22 15:11:22.016648433 +0000 UTC m=+0.079936400 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:11:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:22.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:22.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:23 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:23 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:23.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:24.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:24.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:24.824+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:25 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:25 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:25.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:26.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:26.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:26.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:27 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:27.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:28.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:28.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:28 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:28.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:29.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:29 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:30.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:30.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:30 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:31.660+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:32.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:32.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:32.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:32 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:32 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:33 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:33.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:34.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:34.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:34 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:34.560+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:35 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:35.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:36.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:36.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:36.499+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:36 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:37.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:37 np0005592159 ceph-mon[77081]: Health check update: 110 slow ops, oldest one blocked for 5688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:38.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:38.582+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:38 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:11:38 np0005592159 podman[274001]: 2026-01-22 15:11:38.984979017 +0000 UTC m=+0.050031558 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:11:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:39.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:40.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:40.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:40 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:40.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:41.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:41 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:41 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:42.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:42.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:42.595+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:43 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:43.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:44 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:44.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:44.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:45 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:45.601+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:46.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:46 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:46.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:46.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:47 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:47 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.249 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:11:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:11:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:47.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:48.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:48.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:48 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:48.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:49.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:50.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:50.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:50 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:50.550+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:51 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:51.593+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:52.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:52.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:52 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:52.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:53 np0005592159 podman[274077]: 2026-01-22 15:11:53.022103708 +0000 UTC m=+0.084620552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 22 10:11:53 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:11:53 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:53.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:54.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:54.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:54 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:55.454 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:11:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:55.455 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:11:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:55.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:56.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:56.624+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:57 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:11:57.457 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:11:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:11:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:57.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:11:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:11:58.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:11:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:11:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:11:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:11:58.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:11:58 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:58.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:11:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:11:59.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:11:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:00.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:00.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:00.689+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:01 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:01.710+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:02.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:02.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:02 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:02 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:02.736+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:03.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:04.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:04.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:04.715+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:05 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:05.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:06.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:06.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:06.626+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:06 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:07.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:08.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:08.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:08 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:08 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:08.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:09 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:09 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:12:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:09.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:10 np0005592159 podman[274161]: 2026-01-22 15:12:10.042865914 +0000 UTC m=+0.088725529 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:12:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:10.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:10.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:10.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:10 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:11.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:11 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:12.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:12.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:12.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:13 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:13 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 5718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:13.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:14.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:14.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:14.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:14 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:15.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:16.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:16.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:16 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:16.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:17 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:17 np0005592159 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:17.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:18.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:18.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:18.638+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:19 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:12:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:19 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:12:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:19.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:20.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:20.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:20 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:20.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:21.675+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:22.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:22.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:22.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:22 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:23.662+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592159 podman[274367]: 2026-01-22 15:12:24.091611682 +0000 UTC m=+0.145775849 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 22 10:12:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:24.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:24.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:24 np0005592159 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:25.633+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:25 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:25 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:26.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:26.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:26.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 97 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:27 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:27.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:28.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:28.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:28 np0005592159 ceph-mon[77081]: 97 slow requests (by type [ 'delayed' : 97 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:12:28 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:28.646+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:29.643+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:30 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:30.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:30.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:30.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:31.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:32.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:32.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:32 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:32.572+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:33 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592159 ceph-mon[77081]: Health check update: 97 slow ops, oldest one blocked for 5738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:33 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:12:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:33.555+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:34.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:12:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:34.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:12:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:34.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:34 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:35.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:35 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:36.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:36.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:36.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:37 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:37 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:37.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:38.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:38.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:38.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:38 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:39.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:40.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:40.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:40 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:40.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592159 podman[274452]: 2026-01-22 15:12:41.010403656 +0000 UTC m=+0.064493806 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:12:41 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:41.572+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:42.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:42.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:42.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:43 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:43.622+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:43 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5753 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:43 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:44.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:44.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:44.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:45 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:45.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:46 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:46.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:46.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:46.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:47 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.249 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:12:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:12:47.250 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:12:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:47.603+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:48 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:48.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:48.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:49 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:49.603+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:50 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:50.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:50.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:50.587+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:51 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:51.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:52.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:52.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:52.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:53.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:53 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:53 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5758 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:54.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:54.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:54.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:55 np0005592159 podman[274529]: 2026-01-22 15:12:55.033260068 +0000 UTC m=+0.097339783 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:12:55 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:55.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:56.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:56.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:56.523+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:56 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:12:57.169 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:12:57 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:12:57.170 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:12:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:57.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:12:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:12:58.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:12:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:12:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:12:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:12:58.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:12:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:58.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:58 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 33 ])
Jan 22 10:12:58 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 5768 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:12:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:12:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:12:59.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:12:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:59 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:12:59 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:00.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:00.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:00.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:01.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:01 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:02.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:02.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:02.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:02 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:02 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5773 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:03.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:03 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:04.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:04.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:04.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:05 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:05.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:06 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:06.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:06.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:07 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:07 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5778 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:07 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:07.172 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:13:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:07.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:08.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:08.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:09 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:09.546+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:09 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:09 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:10.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:10.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:10.589+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:10 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:11.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:12 np0005592159 podman[274614]: 2026-01-22 15:13:12.010007612 +0000 UTC m=+0.065979585 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:13:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:12.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:12.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:12 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:12.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:13 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:13.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:14.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:14.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:14 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:14.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:15 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:15.732+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:16.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:16.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:16.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:17 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:17 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:17.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:18 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:18.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:18.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:13:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:13:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:13:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1492018018' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:13:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:18.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:19 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:19.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:20.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:20.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:20.812+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:20 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:21.810+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #196. Immutable memtables: 0.
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.099542) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 196
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802099617, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 2603, "num_deletes": 544, "total_data_size": 4781260, "memory_usage": 4862176, "flush_reason": "Manual Compaction"}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #197: started
Jan 22 10:13:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:22.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802278147, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 197, "file_size": 3125853, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 94743, "largest_seqno": 97340, "table_properties": {"data_size": 3116184, "index_size": 5202, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30539, "raw_average_key_size": 22, "raw_value_size": 3092778, "raw_average_value_size": 2316, "num_data_blocks": 224, "num_entries": 1335, "num_filter_entries": 1335, "num_deletions": 544, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094627, "oldest_key_time": 1769094627, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 178665 microseconds, and 8228 cpu microseconds.
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.278215) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #197: 3125853 bytes OK
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.278241) [db/memtable_list.cc:519] [default] Level-0 commit table #197 started
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290057) [db/memtable_list.cc:722] [default] Level-0 commit table #197: memtable #1 done
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290082) EVENT_LOG_v1 {"time_micros": 1769094802290075, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.290107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 4768375, prev total WAL file size 4768375, number of live WAL files 2.
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000193.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.292159) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034353330' seq:72057594037927935, type:22 .. '6C6F676D0034373834' seq:0, type:0; will stop at (end)
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [197(3052KB)], [195(10MB)]
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802292221, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [197], "files_L6": [195], "score": -1, "input_data_size": 14437088, "oldest_snapshot_seqno": -1}
Jan 22 10:13:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:22.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #198: 14255 keys, 14224353 bytes, temperature: kUnknown
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802413145, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 198, "file_size": 14224353, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14144500, "index_size": 43132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35653, "raw_key_size": 391429, "raw_average_key_size": 27, "raw_value_size": 13899986, "raw_average_value_size": 975, "num_data_blocks": 1579, "num_entries": 14255, "num_filter_entries": 14255, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094802, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 198, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.413414) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 14224353 bytes
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.414755) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 117.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.8 +0.0 blob) out(13.6 +0.0 blob), read-write-amplify(9.2) write-amplify(4.6) OK, records in: 15356, records dropped: 1101 output_compression: NoCompression
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.414771) EVENT_LOG_v1 {"time_micros": 1769094802414763, "job": 126, "event": "compaction_finished", "compaction_time_micros": 120999, "compaction_time_cpu_micros": 38784, "output_level": 6, "num_output_files": 1, "total_output_size": 14224353, "num_input_records": 15356, "num_output_records": 14255, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802415637, "job": 126, "event": "table_file_deletion", "file_number": 197}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000195.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094802417545, "job": 126, "event": "table_file_deletion", "file_number": 195}
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.291956) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417683) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417693) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:22.417702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:22.778+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:23 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:23 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5793 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:23.785+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:24.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:24.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:24 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:24.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:25 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:25.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:26.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:26 np0005592159 podman[274690]: 2026-01-22 15:13:26.187300704 +0000 UTC m=+0.069853486 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:13:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:26.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:26 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:26.804+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:27 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:27 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:27.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:28.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:28.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:28 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:28.743+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:29 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:29.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:30.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:30.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:30 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:30.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:31 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:31.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:32.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:32.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:32.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:32 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:33.655+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:33 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:34.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:34.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #199. Immutable memtables: 0.
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.672695) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 199
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814672784, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 436, "num_deletes": 274, "total_data_size": 343571, "memory_usage": 353000, "flush_reason": "Manual Compaction"}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #200: started
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814691761, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 200, "file_size": 224727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97345, "largest_seqno": 97776, "table_properties": {"data_size": 222373, "index_size": 389, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6751, "raw_average_key_size": 19, "raw_value_size": 217350, "raw_average_value_size": 635, "num_data_blocks": 17, "num_entries": 342, "num_filter_entries": 342, "num_deletions": 274, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094803, "oldest_key_time": 1769094803, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 19119 microseconds, and 2280 cpu microseconds.
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.691824) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #200: 224727 bytes OK
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.691849) [db/memtable_list.cc:519] [default] Level-0 commit table #200 started
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698102) [db/memtable_list.cc:722] [default] Level-0 commit table #200: memtable #1 done
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698130) EVENT_LOG_v1 {"time_micros": 1769094814698122, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698154) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 340725, prev total WAL file size 340725, number of live WAL files 2.
Jan 22 10:13:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:34.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000196.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
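
The RocksDB EVENT_LOG_v1 entries above (flush_started, table_file_creation, flush_finished) each carry one JSON object after the marker, so the flush statistics can be pulled out of the journal mechanically. A minimal sketch, assuming Python 3 and the journal text piped in on stdin; the field names are copied from the entries above, everything else is illustrative and not a Ceph or RocksDB API:

    import json
    import sys

    MARKER = "EVENT_LOG_v1 "

    for line in sys.stdin:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            event = json.loads(line[idx + len(MARKER):])
        except json.JSONDecodeError:
            continue  # truncated or non-JSON tail
        if event.get("event") == "flush_finished":
            # e.g. job 127 above: lsm_state [1, 0, 0, 0, 0, 0, 1]
            print("flush", event["job"], event["lsm_state"])
        elif event.get("event") == "table_file_creation":
            props = event.get("table_properties", {})
            # e.g. file 200 above: 224727 bytes, 342 entries, 274 deletions
            print("table", event["file_number"], event["file_size"],
                  props.get("num_entries"), props.get("num_deletions"))
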
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.699019) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [200(219KB)], [198(13MB)]
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814699073, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [200], "files_L6": [198], "score": -1, "input_data_size": 14449080, "oldest_snapshot_seqno": -1}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #201: 14039 keys, 12781472 bytes, temperature: kUnknown
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814855547, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 201, "file_size": 12781472, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12703981, "index_size": 41282, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35141, "raw_key_size": 387516, "raw_average_key_size": 27, "raw_value_size": 12463868, "raw_average_value_size": 887, "num_data_blocks": 1497, "num_entries": 14039, "num_filter_entries": 14039, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094814, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 201, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.855770) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 12781472 bytes
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.857271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 92.3 rd, 81.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 13.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(121.2) write-amplify(56.9) OK, records in: 14597, records dropped: 558 output_compression: NoCompression
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.857287) EVENT_LOG_v1 {"time_micros": 1769094814857280, "job": 128, "event": "compaction_finished", "compaction_time_micros": 156534, "compaction_time_cpu_micros": 60105, "output_level": 6, "num_output_files": 1, "total_output_size": 12781472, "num_input_records": 14597, "num_output_records": 14039, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814857437, "job": 128, "event": "table_file_deletion", "file_number": 200}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000198.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094814859811, "job": 128, "event": "table_file_deletion", "file_number": 198}
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.698917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:13:34.859849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:13:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:35.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:35 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:36.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
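
The radosgw beast access lines above follow a fixed shape (client, user, timestamp, request, status, bytes, latency), which makes the recurring health-check traffic from 192.168.122.100 and 192.168.122.102 easy to tally. A minimal sketch; the pattern is derived only from the lines in this log and is not an official radosgw log-format specification:

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<time>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]+" '
        r'(?P<status>\d+) (?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous '
            '[22/Jan/2026:15:13:36.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    if m:
        # -> 192.168.122.102 HEAD / 200 latency 0.000000000s
        print(m.group("ip"), m.group("method"), m.group("path"),
              m.group("status"), "latency", m.group("latency") + "s")
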
Jan 22 10:13:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:36.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:36 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:37.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:38.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:38.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:39 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:39.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:40.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:40 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:40.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:40.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:41 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:41 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:13:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:41.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:42.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:42 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:42.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:42.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:43 np0005592159 podman[274906]: 2026-01-22 15:13:43.000353585 +0000 UTC m=+0.054188657 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 22 10:13:43 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:43 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:43.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:44.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:44 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:13:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:13:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:44.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:45 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:45.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:46.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:46 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:46.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:13:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:13:47 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:47.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:48.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:48.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:48 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:48.732+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:49 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:49.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:50.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:50.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:50 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:50.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:51 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:51.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:52.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:13:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:52.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:13:52 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5823 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:13:52 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:52.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:53 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:53.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:54.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:54.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:54.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:13:54 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:56.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:56.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:56.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:57 np0005592159 podman[274981]: 2026-01-22 15:13:57.069229787 +0000 UTC m=+0.123032505 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:13:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:57.688+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:13:58.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:58.306 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:13:58 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:13:58.307 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:13:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:13:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:13:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:13:58.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:13:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:58.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:58 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:13:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:13:59.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:13:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:00.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:00.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:00.674+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:00 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:14:01 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:14:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:14:01 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2969206743' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:14:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:01.636+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:01 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:02.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:02.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:02.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:02 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5833 sec, osd.2 has slow ops (SLOW_OPS)
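
The SLOW_OPS updates in this window report the oldest op blocked for 5808, 5813, 5823 and finally 5833 seconds; using the +5 h offset between the journal stamps and the UTC stamps in the surrounding OSD lines, all four point back at roughly the same moment. A small sketch with only the figures copied from the log; the subtraction is the only thing added:

    from datetime import datetime, timedelta

    # (UTC timestamp of the health update, "blocked for N sec" from the same line)
    updates = [
        ("2026-01-22T15:13:38+00:00", 5808),
        ("2026-01-22T15:13:43+00:00", 5813),
        ("2026-01-22T15:13:52+00:00", 5823),
        ("2026-01-22T15:14:02+00:00", 5833),
    ]
    for stamp, blocked in updates:
        started = datetime.fromisoformat(stamp) - timedelta(seconds=blocked)
        print(stamp, "-> blocked since", started.isoformat())
    # All four resolve to ~13:36:49-13:36:50 UTC: the op's age grows one-for-one
    # with wall-clock time, i.e. it is stuck rather than slowly progressing.
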
Jan 22 10:14:02 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:03 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:14:03.309 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:14:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:03.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:03 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:04.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:04.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:04.659+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:05 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:05.617+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:06 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:06.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:06.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:07 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:07 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5838 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:07.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:08.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:08.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:08 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:08.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:09 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:09.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:10.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:10.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:10.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:11.483+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:11 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:11 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:12.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:12.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:12.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:13.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:13 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:13 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5843 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:13 np0005592159 podman[275067]: 2026-01-22 15:14:13.991554463 +0000 UTC m=+0.053705474 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 10:14:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:14.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:14.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:14.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 118 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:14 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:15.408+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:16.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:16.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:16.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:17 np0005592159 ceph-mon[77081]: 118 slow requests (by type [ 'delayed' : 118 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:17.445+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:17 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:18.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:18.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:18.460+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:14:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:14:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:14:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/35882197' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:14:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:19 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:19.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:20.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:20.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:20.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:20 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:21 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:21.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:22.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:22.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:22.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:22 np0005592159 ceph-mon[77081]: Health check update: 118 slow ops, oldest one blocked for 5847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:23.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:23 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:24.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:24.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:24 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:25.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:14:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:26.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:14:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:26.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:26.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:26 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:27.376+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:27 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:28 np0005592159 podman[275143]: 2026-01-22 15:14:28.047881605 +0000 UTC m=+0.099729156 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 22 10:14:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:28.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:28.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:28.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:29.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:30.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:30.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:31.386+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:31 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:32.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:32.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:33 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:33 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:33.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:34.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:34.379+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:34.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:35.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:36.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:36.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:36.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:36 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:37.318+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:37 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:38.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:38.359+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:38.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:38 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:39.309+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:40.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:40.276+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:40.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:40 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:41 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:41.272+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:42.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:42.253+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:42.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:42 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:42 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:43.261+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:44.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:44 np0005592159 podman[275333]: 2026-01-22 15:14:44.260806197 +0000 UTC m=+0.056900317 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 22 10:14:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:44.271+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 2 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:14:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:44.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:14:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:14:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:45.322+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:45 np0005592159 ceph-mon[77081]: 2 slow requests (by type [ 'delayed' : 2 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:14:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:46.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:46.342+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:46.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:46 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #202. Immutable memtables: 0.
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.209794) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 202
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887209867, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1232, "num_deletes": 369, "total_data_size": 1953270, "memory_usage": 1977888, "flush_reason": "Manual Compaction"}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #203: started
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887219013, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 203, "file_size": 847476, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 97781, "largest_seqno": 99008, "table_properties": {"data_size": 843078, "index_size": 1601, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15841, "raw_average_key_size": 23, "raw_value_size": 832092, "raw_average_value_size": 1209, "num_data_blocks": 69, "num_entries": 688, "num_filter_entries": 688, "num_deletions": 369, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094814, "oldest_key_time": 1769094814, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 9263 microseconds, and 3918 cpu microseconds.
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.219064) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #203: 847476 bytes OK
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.219087) [db/memtable_list.cc:519] [default] Level-0 commit table #203 started
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220935) [db/memtable_list.cc:722] [default] Level-0 commit table #203: memtable #1 done
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220964) EVENT_LOG_v1 {"time_micros": 1769094887220957, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.220985) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 1946701, prev total WAL file size 1946701, number of live WAL files 2.
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000199.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.221696) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [203(827KB)], [201(12MB)]
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887221742, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [203], "files_L6": [201], "score": -1, "input_data_size": 13628948, "oldest_snapshot_seqno": -1}
Jan 22 10:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.251 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:14:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:14:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:14:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:47.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #204: 14004 keys, 10131995 bytes, temperature: kUnknown
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887451555, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 204, "file_size": 10131995, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10058508, "index_size": 37342, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35077, "raw_key_size": 386587, "raw_average_key_size": 27, "raw_value_size": 9823000, "raw_average_value_size": 701, "num_data_blocks": 1335, "num_entries": 14004, "num_filter_entries": 14004, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769094887, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 204, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.451855) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 10131995 bytes
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.455552) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 59.3 rd, 44.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(28.0) write-amplify(12.0) OK, records in: 14727, records dropped: 723 output_compression: NoCompression
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.455582) EVENT_LOG_v1 {"time_micros": 1769094887455570, "job": 130, "event": "compaction_finished", "compaction_time_micros": 229890, "compaction_time_cpu_micros": 38006, "output_level": 6, "num_output_files": 1, "total_output_size": 10131995, "num_input_records": 14727, "num_output_records": 14004, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887456248, "job": 130, "event": "table_file_deletion", "file_number": 203}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000201.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769094887459351, "job": 130, "event": "table_file_deletion", "file_number": 201}
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.221641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:14:47.459431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:47 np0005592159 ceph-mon[77081]: Health check update: 2 slow ops, oldest one blocked for 5877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:48.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:48.302+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:48.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:48 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:49.304+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:50.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:50.265+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:50 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:14:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:50.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:51 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:51.311+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:52.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:52.314+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:52.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:53.350+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:54.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:54.306+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:14:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:54.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:14:54 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:54 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:55.345+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:56 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:56.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:56.378+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:56.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:14:57 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:57 np0005592159 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:14:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:57.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:14:58.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:58.387+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:14:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:14:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:14:58.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:14:58 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:14:59 np0005592159 podman[275438]: 2026-01-22 15:14:59.08265192 +0000 UTC m=+0.134034212 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:14:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:14:59.385+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:14:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:00.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:00.397+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:00.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:01.370+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:01 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:02 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:02.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:02.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:02.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:03 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:03 np0005592159 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:03.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:04.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:04.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:04.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:04 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:04 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:05.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:05 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:06.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:06.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:06.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:06 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:06.537 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:15:06 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:06.539 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:15:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:06 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:07.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:07 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:08.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:08.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:08.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:08 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:08.541 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:15:08 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:09.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:09 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:10.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:10.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:10.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:10 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:11.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:11 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:12.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:12.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 122 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:12.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:12 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:12 np0005592159 ceph-mon[77081]: Health check update: 122 slow ops, oldest one blocked for 5902 sec, osd.2 has slow ops (SLOW_OPS)
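The SLOW_OPS health update puts the oldest blocked op at 5902 s (about 98 minutes), so it has been stuck since roughly 08:37, consistent with the same rbd_mirror_snapshot_schedule read being reported on every get_health_metrics tick. A hedged helper sketch for pulling the offending ops straight from osd.2, assuming the ceph CLI can reach the OSD's admin socket (for example from cephadm shell on this node):

```python
# Hedged helper for chasing the SLOW_OPS warning: list osd.2's in-flight ops
# and print the ones older than a threshold.  Assumes the ceph CLI is
# available where this runs; the JSON keys ("ops", "age", "description")
# follow the standard dump_ops_in_flight output.
import json
import subprocess

def dump_slow_ops(osd_id=2, min_age_sec=30.0):
    out = subprocess.run(
        ["ceph", "daemon", f"osd.{osd_id}", "dump_ops_in_flight"],
        check=True, capture_output=True, text=True,
    ).stdout
    for op in json.loads(out).get("ops", []):
        age = float(op.get("age", 0.0))
        if age >= min_age_sec:
            print(f"{age:10.1f}s  {op.get('description', '')}")

if __name__ == "__main__":
    dump_slow_ops()
```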
Jan 22 10:15:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:13.436+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:13 np0005592159 ceph-mon[77081]: 122 slow requests (by type [ 'delayed' : 122 ] most affected pool [ 'vms' : 81 ])
Jan 22 10:15:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:14.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:14.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:14.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:15 np0005592159 podman[275522]: 2026-01-22 15:15:15.031745041 +0000 UTC m=+0.085030202 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
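The podman entry above is the periodic healthcheck for the ovn_metadata_agent container ('test': '/openstack/healthcheck') coming back healthy with a zero failing streak. A small illustrative helper for re-running that check and reading the recorded health state by hand (names mirror the log; nothing here is part of the deployment):

```python
# Illustrative helper, not part of the deployment: re-run the container's
# configured healthcheck and read back the health state podman recorded.
import json
import subprocess

def container_health(name="ovn_metadata_agent"):
    # `podman healthcheck run` executes the configured test
    # (/openstack/healthcheck for this container) and exits 0 when healthy.
    rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
    inspect = json.loads(subprocess.run(
        ["podman", "inspect", name],
        check=True, capture_output=True, text=True,
    ).stdout)[0]
    state = inspect.get("State", {})
    # The key name varies across podman versions ("Health" vs "Healthcheck").
    health = state.get("Health") or state.get("Healthcheck") or {}
    return rc == 0, health.get("Status"), health.get("FailingStreak")

if __name__ == "__main__":
    print(container_health())
```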
Jan 22 10:15:15 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:15.406+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:16.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:16.454+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:16.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:17 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:17 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:17.473+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:18.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:18 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:18.428+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:18.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:19 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:19.423+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:20.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:20.417+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:20.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:21.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:22 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:22.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:22.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:22.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:23.398+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:24.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:24.373+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:25.377+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 10:15:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:26.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 10:15:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:26.343+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:26.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:27.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:28.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:28.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:28.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:28 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:15:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 18K writes, 99K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.17 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1768 writes, 10K keys, 1768 commit groups, 1.0 writes per commit group, ingest: 16.40 MB, 0.03 MB/s#012Interval WAL: 1768 writes, 1768 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.6      1.39              0.39        65    0.021       0      0       0.0       0.0#012  L6      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.8    129.6    112.0      5.51              2.12        64    0.086    652K    35K       0.0       0.0#012 Sum      1/0    9.66 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.8    103.4    104.8      6.90              2.51       129    0.054    652K    35K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.6     77.3     75.7      1.12              0.34        14    0.080    103K   5155       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    129.6    112.0      5.51              2.12        64    0.086    652K    35K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     76.8      1.39              0.39        64    0.022       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.104, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.71 GB write, 0.12 MB/s write, 0.70 GB read, 0.12 MB/s read, 6.9 seconds#012Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.08 GB read, 0.14 MB/s read, 1.1 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 75.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.00043 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3950,71.45 MB,23.5028%) FilterBlock(129,1.77 MB,0.581977%) IndexBlock(129,2.23 MB,0.735037%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
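The DUMPING STATS block above is rocksdb's periodic (600 s interval) statistics dump from the monitor's store; the uptime, write, WAL and compaction figures all come from that single message. If the same store needs to be inspected on demand rather than waiting for the next dump, the monitor's perf counters expose a rocksdb section over the admin socket; a sketch, assuming the ceph CLI can reach mon.compute-2's socket:

```python
# Sketch under assumptions: read the monitor's rocksdb perf counters over the
# admin socket instead of waiting for the next periodic stats dump.  Assumes
# "ceph daemon mon.<name> perf dump" is reachable from where this runs.
import json
import subprocess

def mon_rocksdb_counters(mon_name="compute-2"):
    out = subprocess.run(
        ["ceph", "daemon", f"mon.{mon_name}", "perf", "dump"],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out).get("rocksdb", {})

if __name__ == "__main__":
    for key, value in sorted(mon_rocksdb_counters().items()):
        print(f"{key}: {value}")
```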
Jan 22 10:15:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:29.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:30 np0005592159 podman[275598]: 2026-01-22 15:15:30.107653225 +0000 UTC m=+0.152028493 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:15:30 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:30.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:30.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:31.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:32.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:32.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:32 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:32.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:33.455+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:34 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:34.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:34.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:15:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:34.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:15:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:35.413+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:35 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:36.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:36.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:36.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:37.478+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:38.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:38.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:38.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:39.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:39 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:39 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:40.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:40.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:40.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:41.505+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:41 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:42.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:42.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:42.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 3 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:15:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:43 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:43 np0005592159 ceph-mon[77081]: Health check update: 3 slow ops, oldest one blocked for 5932 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:43.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:44.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:44.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:44.533+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:44 np0005592159 ceph-mon[77081]: 3 slow requests (by type [ 'delayed' : 3 ] most affected pool [ 'vms' : 2 ])
Jan 22 10:15:44 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:45.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592159 podman[275681]: 2026-01-22 15:15:46.043277415 +0000 UTC m=+0.087702989 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 22 10:15:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:46.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:46 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:46.486+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:46.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.252 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:15:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:15:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:15:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:47.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:48 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:48 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:48 np0005592159 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:48.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:48.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:48.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:48 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:49.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:49 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:50.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:50.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:50.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:50 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:51.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:52.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:15:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:52.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:15:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:52.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:15:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:53.547+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:54 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:54.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:54.523+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:54.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:55 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:55.517+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:56.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:56.476+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:56.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:56 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:57.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:57 np0005592159 ceph-mon[77081]: Health check update: 123 slow ops, oldest one blocked for 5948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:15:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:15:58.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:58.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 123 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:15:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:15:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:15:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:15:58.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:15:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:15:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:15:59.468+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:15:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:15:59 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:00.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:00 np0005592159 podman[275864]: 2026-01-22 15:16:00.522247514 +0000 UTC m=+0.115709322 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 22 10:16:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:00.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:01 np0005592159 ceph-mon[77081]: 123 slow requests (by type [ 'delayed' : 123 ] most affected pool [ 'vms' : 82 ])
Jan 22 10:16:01 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:16:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:01.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 110 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:02 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:02.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:02.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:03 np0005592159 ceph-mon[77081]: 110 slow requests (by type [ 'delayed' : 110 ] most affected pool [ 'vms' : 74 ])
Jan 22 10:16:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:03.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:04.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:04 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:04.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:04.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:05.457+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:05 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:06.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:06 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:06.505+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:07.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:07 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5957 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:07 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:08.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:08.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:08.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:09 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:09.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:10 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:10.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:10.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:10.566+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:11 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:11.588+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:12.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:12 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:12.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:12.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:13 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:13 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:13.531+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:14.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:14 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:14.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:14.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:15 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:15.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:16.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:16.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:16.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:16 np0005592159 podman[275975]: 2026-01-22 15:16:16.996122058 +0000 UTC m=+0.058274397 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:16:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:17.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:17 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:18.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2350512641' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:16:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:18.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:18.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:18 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:19.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:20.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:20 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:20.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:20.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:21.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:22.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:22.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:22.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:23 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:23 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:23.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:24 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:24.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:24.410 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:16:24 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:24.411 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:16:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:24.496+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:24.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:25 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:25.413 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:16:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:25.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:25 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:26.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:26.487+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:26.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:26 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:26 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:27.460+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:27 np0005592159 ceph-mon[77081]: Health check update: 4 slow ops, oldest one blocked for 5978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:28.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:28.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:28.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:29 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:29.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:30.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:30.425+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:31.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:31 np0005592159 podman[276051]: 2026-01-22 15:16:31.059831106 +0000 UTC m=+0.110733601 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:16:31 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:16:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.5 total, 600.0 interval#012Cumulative writes: 12K writes, 41K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s#012Cumulative WAL: 12K writes, 4221 syncs, 3.05 writes per sync, written: 0.03 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 715 writes, 1413 keys, 715 commit groups, 1.0 writes per commit group, ingest: 0.64 MB, 0.00 MB/s#012Interval WAL: 715 writes, 337 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.15              0.00         1    0.145       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x557358da5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.5 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Jan 22 10:16:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:31.407+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 4 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:16:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:32.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:32.397+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:33.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:33 np0005592159 ceph-mon[77081]: 4 slow requests (by type [ 'delayed' : 4 ] most affected pool [ 'vms' : 3 ])
Jan 22 10:16:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:33.365+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:34.331+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:34.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:34 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:35.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:35.292+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:35 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:36.270+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:36 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:36.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:37.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:37.241+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:37 np0005592159 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5987 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:37 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:38.221+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 54 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:16:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:38.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:39.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:39.190+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:39 np0005592159 ceph-mon[77081]: 54 slow requests (by type [ 'delayed' : 54 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:16:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:40.212+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:40.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:40 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:40 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:41.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:41.220+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:42 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:42.221+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:42.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:43.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:43.178+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:43 np0005592159 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5992 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:43 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:44.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:44.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:45.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:45.160+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:45 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #205. Immutable memtables: 0.
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.099518) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 205
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006099588, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 1831, "num_deletes": 446, "total_data_size": 3293096, "memory_usage": 3349456, "flush_reason": "Manual Compaction"}
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #206: started
Jan 22 10:16:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:46.178+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:46.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006768728, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 206, "file_size": 2139298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 99013, "largest_seqno": 100839, "table_properties": {"data_size": 2132203, "index_size": 3524, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 22351, "raw_average_key_size": 22, "raw_value_size": 2115184, "raw_average_value_size": 2149, "num_data_blocks": 153, "num_entries": 984, "num_filter_entries": 984, "num_deletions": 446, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769094887, "oldest_key_time": 1769094887, "file_creation_time": 1769095006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 669407 microseconds, and 11616 cpu microseconds.
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.768926) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #206: 2139298 bytes OK
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.769004) [db/memtable_list.cc:519] [default] Level-0 commit table #206 started
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996427) [db/memtable_list.cc:722] [default] Level-0 commit table #206: memtable #1 done
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996483) EVENT_LOG_v1 {"time_micros": 1769095006996470, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.996516) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3283757, prev total WAL file size 3332115, number of live WAL files 2.
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000202.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.998457) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [206(2089KB)], [204(9894KB)]
Jan 22 10:16:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095006998504, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [206], "files_L6": [204], "score": -1, "input_data_size": 12271293, "oldest_snapshot_seqno": -1}
Jan 22 10:16:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:47.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:47.136+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:16:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:16:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #207: 14083 keys, 10401068 bytes, temperature: kUnknown
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007809972, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 207, "file_size": 10401068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10326606, "index_size": 38125, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35269, "raw_key_size": 388166, "raw_average_key_size": 27, "raw_value_size": 10089301, "raw_average_value_size": 716, "num_data_blocks": 1367, "num_entries": 14083, "num_filter_entries": 14083, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095006, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 207, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.810231) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 10401068 bytes
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.979638) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 15.1 rd, 12.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 9.7 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 14988, records dropped: 905 output_compression: NoCompression
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.979694) EVENT_LOG_v1 {"time_micros": 1769095007979674, "job": 132, "event": "compaction_finished", "compaction_time_micros": 811546, "compaction_time_cpu_micros": 25578, "output_level": 6, "num_output_files": 1, "total_output_size": 10401068, "num_input_records": 14988, "num_output_records": 14083, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007980302, "job": 132, "event": "table_file_deletion", "file_number": 206}
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000204.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095007982820, "job": 132, "event": "table_file_deletion", "file_number": 204}
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:46.998382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982966) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982968) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:16:47.982969) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:16:48 np0005592159 podman[276136]: 2026-01-22 15:16:48.008071214 +0000 UTC m=+0.055511135 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:16:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:48.177+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:48.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:49.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:49.184+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592159 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 5997 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:49 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:50 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:50.202+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:50.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:51.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:51 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:51.219+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:52.204+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:52.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:16:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:53.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:16:53 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:53.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:54.147+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:54.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:54 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:55.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:55.141+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:56.125+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:16:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:56.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:57 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:57 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:57.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:57.122+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:16:57 np0005592159 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 6007 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:16:57 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:16:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:58.159+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:16:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:16:58.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:16:58 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 73 ])
Jan 22 10:16:58 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:16:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:16:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:16:59.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:16:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:16:59.128+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:16:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:16:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:16:59 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:00.147+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 116 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:00.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:00 np0005592159 ceph-mon[77081]: 116 slow requests (by type [ 'delayed' : 116 ] most affected pool [ 'vms' : 79 ])
Jan 22 10:17:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:01.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:01.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 68 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 68 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 10:17:02 np0005592159 podman[276294]: 2026-01-22 15:17:02.01858838 +0000 UTC m=+0.078313463 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 22 10:17:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:02.144+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 10:17:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:02.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:02 np0005592159 ceph-mon[77081]: 68 slow requests (by type [ 'delayed' : 68 ] most affected pool [ 'vms' : 47 ])
Jan 22 10:17:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:03.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:03.098+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 86 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: Health check update: 116 slow ops, oldest one blocked for 6012 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 32 ])
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: 86 slow requests (by type [ 'delayed' : 86 ] most affected pool [ 'vms' : 60 ])
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:17:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:04.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:04.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:05 np0005592159 ceph-mon[77081]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:05.059+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 131 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:05.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:06.013+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:06.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:07.038+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:07.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:07 np0005592159 ceph-mon[77081]: 131 slow requests (by type [ 'delayed' : 131 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592159 ceph-mon[77081]: Health check update: 131 slow ops, oldest one blocked for 6018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:08 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:08.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:09.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:09.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:09 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:10.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:10.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:10.977+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:11 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:11.937+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:12.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:17:12 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:12.890+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:13.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:13.883+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:14 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:14 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:14.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:14.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:15.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:15 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:15.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:16.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:16.788+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:17.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:17.817+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:18 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:18.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:18.784+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:19 np0005592159 podman[276430]: 2026-01-22 15:17:19.030382578 +0000 UTC m=+0.084222717 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:17:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:19.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:19.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:19 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:20.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:20.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:21.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:21.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:22.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:22.774+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:23.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:23 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:23.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:24.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:24.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:25 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:25.022 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:17:25 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:25.024 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:17:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:25.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:25 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:25.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:26.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:26.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:27 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:27.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:27.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:28 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:28.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:28.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:29.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:29.798+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:30.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:30.823+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:31.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:31.794+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:32.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:32.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:33 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:33.027 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:17:33 np0005592159 podman[276507]: 2026-01-22 15:17:33.072975501 +0000 UTC m=+0.119905402 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:17:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:33.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:33.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:34 np0005592159 ceph-mon[77081]: Health check update: 12 slow ops, oldest one blocked for 6043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:34.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:34.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 12 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:35.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:35.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 30 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:17:36 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:36.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:37.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:37.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 83 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592159 ceph-mon[77081]: 12 slow requests (by type [ 'delayed' : 12 ] most affected pool [ 'vms' : 9 ])
Jan 22 10:17:37 np0005592159 ceph-mon[77081]: 30 slow requests (by type [ 'delayed' : 30 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:17:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:38.754+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:38 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:39.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:39.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:39 np0005592159 ceph-mon[77081]: 83 slow requests (by type [ 'delayed' : 83 ] most affected pool [ 'vms' : 55 ])
Jan 22 10:17:39 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:40.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:41.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:41.671+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:42.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:42.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:43.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:43 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:43 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:43.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:44.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:44.646+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:44 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:45.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:45.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:46 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:46 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:46.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:46.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:47.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.253 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:17:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:17:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:17:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:47.656+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:48.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:48.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:49.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:49 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:49 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:49.669+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:49 np0005592159 podman[276592]: 2026-01-22 15:17:49.990129071 +0000 UTC m=+0.053242236 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:17:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:50.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:50.672+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:50 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:51.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:51.663+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:52.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:52.664+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:53.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:53.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:53 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:54.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:54.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:54 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:17:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:55.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:55.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:55 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:56.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:56.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:17:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:57.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:17:57 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:17:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:57.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:17:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:17:58.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:17:58 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:58.689+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:17:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:17:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:17:59.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:17:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:17:59.641+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:17:59 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:17:59 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:00.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:00.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:01 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:01.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:01.668+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:02 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:02.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:03 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:03 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:03.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:04 np0005592159 podman[276618]: 2026-01-22 15:18:04.083265251 +0000 UTC m=+0.130994493 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 10:18:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:04.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:04.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:05 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:05 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:05.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:05.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:06.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:06 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:06 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:06.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:07.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:07 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:08.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:08 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:09.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:09.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:09 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:10.771+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:10 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:11.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:12 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:12.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:12.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:13 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:13 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:13.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:14 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:18:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:18:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:14.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:14.671+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:15.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:15 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:15.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:16 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:16.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:16.707+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:17.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:17 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:17 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:17.698+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:18 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:18.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:18.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:19.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:19 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:19.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:20.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:20.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:20 np0005592159 podman[276859]: 2026-01-22 15:18:20.747399806 +0000 UTC m=+0.056288526 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:18:21 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:18:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:21.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:21.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:22.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:22.688+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:23.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:23 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:23.642+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:24.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:25 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:25 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:25.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:25.579+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:26 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:26.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:27.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:27.632+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:27 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:28.012 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:18:28 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:28.013 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:18:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:28 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:28.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:29.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:29 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:29.665+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:30.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:30 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:31.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:31.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:31.696+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:32 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:32.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:33 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:33 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:33.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:33.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:33.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:34 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:34.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:34 np0005592159 podman[276963]: 2026-01-22 15:18:34.998848893 +0000 UTC m=+0.065241580 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 10:18:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:35.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:35 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:35.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:35.707+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:36 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:36.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:37 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:37.017 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:18:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:37.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:37.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:37 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:37.762+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:38 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:38 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:38.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:39.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:39.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:39 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:39.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:40.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:41.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:41.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:41.858+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:42 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:42.903+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:43.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:43.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:43 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:43 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:43.937+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:44.962+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:45.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:45 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:45.915+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:46.903+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:47.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:47 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:47.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.254 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.255 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:18:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:18:47.255 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:18:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:47.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #208. Immutable memtables: 0.
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.074149) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 208
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128074173, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 1889, "num_deletes": 459, "total_data_size": 3515101, "memory_usage": 3585424, "flush_reason": "Manual Compaction"}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #209: started
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128096614, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 209, "file_size": 2286898, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 100844, "largest_seqno": 102728, "table_properties": {"data_size": 2279320, "index_size": 3943, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23062, "raw_average_key_size": 22, "raw_value_size": 2261434, "raw_average_value_size": 2217, "num_data_blocks": 171, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 459, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095006, "oldest_key_time": 1769095006, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 22532 microseconds, and 5454 cpu microseconds.
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.096676) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #209: 2286898 bytes OK
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.096695) [db/memtable_list.cc:519] [default] Level-0 commit table #209 started
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098633) [db/memtable_list.cc:722] [default] Level-0 commit table #209: memtable #1 done
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098647) EVENT_LOG_v1 {"time_micros": 1769095128098643, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.098663) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 3505427, prev total WAL file size 3505691, number of live WAL files 2.
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000205.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.100693) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034373833' seq:72057594037927935, type:22 .. '6C6F676D0035303335' seq:0, type:0; will stop at (end)
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [209(2233KB)], [207(10157KB)]
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128100762, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [209], "files_L6": [207], "score": -1, "input_data_size": 12687966, "oldest_snapshot_seqno": -1}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #210: 14170 keys, 12485228 bytes, temperature: kUnknown
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128200425, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 210, "file_size": 12485228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407607, "index_size": 41092, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390167, "raw_average_key_size": 27, "raw_value_size": 12166067, "raw_average_value_size": 858, "num_data_blocks": 1492, "num_entries": 14170, "num_filter_entries": 14170, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095128, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 210, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.200701) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 12485228 bytes
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202414) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 127.2 rd, 125.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.9 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(11.0) write-amplify(5.5) OK, records in: 15103, records dropped: 933 output_compression: NoCompression
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.202486) EVENT_LOG_v1 {"time_micros": 1769095128202461, "job": 134, "event": "compaction_finished", "compaction_time_micros": 99768, "compaction_time_cpu_micros": 34508, "output_level": 6, "num_output_files": 1, "total_output_size": 12485228, "num_input_records": 15103, "num_output_records": 14170, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128203294, "job": 134, "event": "table_file_deletion", "file_number": 209}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000207.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095128206751, "job": 134, "event": "table_file_deletion", "file_number": 207}
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.100518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206809) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206810) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:18:48.206812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:18:48 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:48.902+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:49.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:49.860+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:50 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:50.902+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592159 podman[277047]: 2026-01-22 15:18:51.011055846 +0000 UTC m=+0.068089674 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 22 10:18:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:51.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:51 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:18:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:51.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:18:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:51.923+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:52 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:52.921+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:53.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:53.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:53.944+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:54 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:54 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:54.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:55.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:55.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:55.907+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:56 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:56.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:57.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:57.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:57 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:57.850+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:58 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:58 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:18:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:58.824+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:18:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:18:59.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:18:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:18:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:18:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:18:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:18:59.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:18:59 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:18:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:18:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:18:59.869+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:00 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:00.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:01.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:01.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:01.885+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:02 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:02.860+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:03.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:03 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:03 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:03.846+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:04 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:04 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:04.806+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:05.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:05.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:05 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:06 np0005592159 podman[277074]: 2026-01-22 15:19:06.075540091 +0000 UTC m=+0.132834041 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 22 10:19:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:06.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:06 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:07.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:07.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:07.868+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:08.909+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:09 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:09 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:09.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:09.904+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:10.898+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:11.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:11.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:11 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:11.887+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:12 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:12.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:13.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:13.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:13 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:13 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:13.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:14 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:14.890+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:15.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:15.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:15.853+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:15 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:16.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:16 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:17.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:17.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:17.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:18 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:18 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:18.790+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:19.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:19 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:19.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:19.833+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:20 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:20.873+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:21.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:21 np0005592159 podman[277231]: 2026-01-22 15:19:21.175045775 +0000 UTC m=+0.083621132 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:19:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:21.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:21 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:21.923+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:22.911+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:19:22 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:23.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:23.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:23.916+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:24.871+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:25 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:25.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:25.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:25.847+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:26.881+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:27.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:27.886+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:27 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:28.871+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:29 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:29.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:29.914+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:30 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:30.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:31.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:31 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:19:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:31.927+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:32.932+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 137 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:33 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:33 np0005592159 ceph-mon[77081]: Health check update: 137 slow ops, oldest one blocked for 6163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:33.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:33.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:33.892+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:34 np0005592159 ceph-mon[77081]: 137 slow requests (by type [ 'delayed' : 137 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:19:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:34.922+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:35.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:35 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:35.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:35.894+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:36 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:36.845+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:37 np0005592159 podman[277414]: 2026-01-22 15:19:37.014111267 +0000 UTC m=+0.074867930 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 22 10:19:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:37 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:37.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:37.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:38 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:38.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 10:19:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:39.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 10:19:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:39.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:40 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:40.880+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:41.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:41.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:41 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:41.841+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:42.875+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:43.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:43.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:43 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:43.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:44 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:44.841+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:45.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:45.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:45.855+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:46 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:46.886+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:47.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.256 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.256 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:19:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:19:47.257 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:19:47 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:47 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:47.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:47.930+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:48 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:48.896+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:49.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:49.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:49.876+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:49 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:50.857+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:51 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:51.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:51.834+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:51 np0005592159 podman[277498]: 2026-01-22 15:19:51.997742907 +0000 UTC m=+0.057867809 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:19:52 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:52.806+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:53.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:53.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:53.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:53 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:53 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:54.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:55.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #211. Immutable memtables: 0.
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.223983) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 211
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195224066, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 1180, "num_deletes": 362, "total_data_size": 1970507, "memory_usage": 1996432, "flush_reason": "Manual Compaction"}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #212: started
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195238270, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 212, "file_size": 1293702, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 102734, "largest_seqno": 103908, "table_properties": {"data_size": 1288682, "index_size": 2223, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 15155, "raw_average_key_size": 22, "raw_value_size": 1277050, "raw_average_value_size": 1864, "num_data_blocks": 95, "num_entries": 685, "num_filter_entries": 685, "num_deletions": 362, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095128, "oldest_key_time": 1769095128, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 14401 microseconds, and 8316 cpu microseconds.
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.238390) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #212: 1293702 bytes OK
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.238421) [db/memtable_list.cc:519] [default] Level-0 commit table #212 started
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240530) [db/memtable_list.cc:722] [default] Level-0 commit table #212: memtable #1 done
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240549) EVENT_LOG_v1 {"time_micros": 1769095195240542, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.240571) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 1964213, prev total WAL file size 1964213, number of live WAL files 2.
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000208.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.241671) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [212(1263KB)], [210(11MB)]
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195241716, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [212], "files_L6": [210], "score": -1, "input_data_size": 13778930, "oldest_snapshot_seqno": -1}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #213: 14116 keys, 12015602 bytes, temperature: kUnknown
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195331669, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 213, "file_size": 12015602, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11938470, "index_size": 40731, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35333, "raw_key_size": 389314, "raw_average_key_size": 27, "raw_value_size": 11697975, "raw_average_value_size": 828, "num_data_blocks": 1475, "num_entries": 14116, "num_filter_entries": 14116, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 213, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.332232) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 12015602 bytes
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.334484) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.8 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 11.9 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(19.9) write-amplify(9.3) OK, records in: 14855, records dropped: 739 output_compression: NoCompression
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.334521) EVENT_LOG_v1 {"time_micros": 1769095195334504, "job": 136, "event": "compaction_finished", "compaction_time_micros": 90183, "compaction_time_cpu_micros": 29430, "output_level": 6, "num_output_files": 1, "total_output_size": 12015602, "num_input_records": 14855, "num_output_records": 14116, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195335697, "job": 136, "event": "table_file_deletion", "file_number": 212}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000210.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095195341241, "job": 136, "event": "table_file_deletion", "file_number": 210}
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.241620) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341518) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:19:55.341529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:19:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:55.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:55 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:55.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:56.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:19:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:57.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:19:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:57.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:57.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:58 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:58.686+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:59 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:19:59 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:19:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:19:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:19:59.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:19:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:19:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:19:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:19:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:19:59.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:19:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:19:59.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:19:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:00 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 25 slow ops, oldest one blocked for 6188 sec, osd.2 has slow ops
Jan 22 10:20:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:00.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:01.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:01.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:01 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:01 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:01.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:02 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:02.708+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 25 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:03.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:03.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:03.726+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:03 np0005592159 ceph-mon[77081]: 25 slow requests (by type [ 'delayed' : 25 ] most affected pool [ 'vms' : 18 ])
Jan 22 10:20:03 np0005592159 ceph-mon[77081]: Health check update: 25 slow ops, oldest one blocked for 6193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:04.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:05.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:05 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:05.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:05.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:06 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:06.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:07.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:07.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:07 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:07.728+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592159 podman[277575]: 2026-01-22 15:20:08.050161948 +0000 UTC m=+0.102321729 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 22 10:20:08 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:08 np0005592159 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6198 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:08.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:09.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:09.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:09.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:10 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:10.785+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:11.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:11.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:11 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:11.768+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:12.736+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:13.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:13.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:13 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:13 np0005592159 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6203 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:13.778+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:14.771+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:15.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:15.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:15.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:16 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:16.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:17.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:17.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:17.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:17 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:18.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:19.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:19 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:19 np0005592159 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6208 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:19.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:19.644+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:20.685+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:21.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:21 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:21.638+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:22.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592159 podman[277609]: 2026-01-22 15:20:23.026850113 +0000 UTC m=+0.083014887 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, managed_by=edpm_ansible)
Jan 22 10:20:23 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:23.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:23.604+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:24 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:24.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:25.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:25.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:25 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:25.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 72 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:26.584+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:26 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:26 np0005592159 ceph-mon[77081]: 72 slow requests (by type [ 'delayed' : 72 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:20:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:27.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:27.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:27.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:27 np0005592159 ceph-mon[77081]: Health check update: 72 slow ops, oldest one blocked for 6218 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:27 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:28.573+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:29 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:29.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:29.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:30 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:30.610+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:31.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:31 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:31.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592159 podman[277974]: 2026-01-22 15:20:32.496823871 +0000 UTC m=+0.056249225 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Jan 22 10:20:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:32.542+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592159 podman[277994]: 2026-01-22 15:20:32.651588812 +0000 UTC m=+0.059793749 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:20:32 np0005592159 podman[277974]: 2026-01-22 15:20:32.657510259 +0000 UTC m=+0.216935613 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:20:32 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6223 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 10:20:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:33.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 10:20:33 np0005592159 podman[278127]: 2026-01-22 15:20:33.343239265 +0000 UTC m=+0.058258429 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:20:33 np0005592159 podman[278127]: 2026-01-22 15:20:33.353955719 +0000 UTC m=+0.068974863 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:20:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:33.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:33 np0005592159 podman[278194]: 2026-01-22 15:20:33.57422469 +0000 UTC m=+0.058557946 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.buildah.version=1.28.2, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Jan 22 10:20:33 np0005592159 podman[278194]: 2026-01-22 15:20:33.588849369 +0000 UTC m=+0.073182585 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, version=2.2.4, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793)
Jan 22 10:20:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:33.590+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:33 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:34.618+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:34 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:20:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:20:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:35.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:35.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:35.654+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:36 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:36.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:37 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:37.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:37.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:37.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:38.592+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:38 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592159 podman[278362]: 2026-01-22 15:20:39.085280129 +0000 UTC m=+0.132423699 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Jan 22 10:20:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:39.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:39.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:39.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:39 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6228 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:40.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:41.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:41.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:41 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:41.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:42.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:43.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:43.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:43.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:44.511+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:45.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:45.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:45.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:46.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:47.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.257 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.258 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:20:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:20:47.258 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:20:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:47.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:47.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:47 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6233 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:48.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:49.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:49.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:49 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:49.571+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:50.551+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:20:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:51.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:20:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:51.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:51.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:52 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:52 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6238 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:52.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:53.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:20:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:53.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:20:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:53.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:53 np0005592159 podman[278446]: 2026-01-22 15:20:53.989285873 +0000 UTC m=+0.051702984 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:20:54 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:54 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:54.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:55.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:55 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:20:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:55.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:55.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:56.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:56 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:56 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:57.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:57.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:57.543+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:20:58 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:20:58 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6243 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:20:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:58.571+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:20:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:20:59.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:20:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:20:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:20:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:20:59.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:20:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:20:59.598+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:20:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:00 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:00 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:00.631+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:01.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:01 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:01.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:01.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:02 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:02 np0005592159 ceph-mon[77081]: Health check update: 139 slow ops, oldest one blocked for 6247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:02.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:03.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:03.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:03 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:04 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:04.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:05.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:05.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:05.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:05 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:06.591+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:06 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:07.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:07.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:07.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:07 np0005592159 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:21:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:08.660+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:08 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:09.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:09.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:09.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:10 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:10 np0005592159 podman[278574]: 2026-01-22 15:21:10.085533267 +0000 UTC m=+0.133575900 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 22 10:21:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:10.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:11 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:11.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:11.756+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:12 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:12.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:13.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:13.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:13 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592159 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:13 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:13.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:14.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:14 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:15.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:15.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:15.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:16 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:16.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:17.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:17 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:17.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:17.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6268 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:21:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4178158463' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:21:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:18.723+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:19.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:19.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:19.715+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:19 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:20.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:20 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:21.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:21.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:21.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:21 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:22.755+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:22 np0005592159 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6273 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:23.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:23.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:23.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:24 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:24.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:25 np0005592159 podman[278609]: 2026-01-22 15:21:25.027485931 +0000 UTC m=+0.082068721 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:21:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:25.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:25 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:25.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:25.726+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:26.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 76 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:27.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:27.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:27.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:28 np0005592159 ceph-mon[77081]: 76 slow requests (by type [ 'delayed' : 76 ] most affected pool [ 'vms' : 50 ])
Jan 22 10:21:28 np0005592159 ceph-mon[77081]: Health check update: 76 slow ops, oldest one blocked for 6278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:28.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:29.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:29 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:29.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:29.668+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:30.681+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:30 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:30 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:31.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:31.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:31.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:32 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:32.677+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:33 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:33 np0005592159 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:33.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:33.684+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:34 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:34.666+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:35 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:35.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:35.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:35.713+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:36 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:36.695+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:37 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:37.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:37.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:37.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:38 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:38 np0005592159 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:38.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:39.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:39.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:40 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:40 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 87 ])
Jan 22 10:21:40 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:40.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:41 np0005592159 podman[278686]: 2026-01-22 15:21:41.018140039 +0000 UTC m=+0.080207462 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:21:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:41.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:41.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:41.753+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:41 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:42.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:43 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:43 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:43.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:43.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:43.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:44 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:44.769+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:45 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:45.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:45.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:45.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:46 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:46.822+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.259 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:21:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:21:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:47.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:47.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:47 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:47 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:47.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:48 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:48.794+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:49.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:49.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:49.761+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:49 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:50.739+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:50 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:51.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:51.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:51.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:52 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:52.773+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:53 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:53 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6303 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:53.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:53.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:53.802+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:54 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:54.764+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:55 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:55.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:55.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:55 np0005592159 podman[278793]: 2026-01-22 15:21:55.579214523 +0000 UTC m=+0.080779556 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:21:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:55.602 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:21:55 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:55.602 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:21:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:55.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:56 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:56 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:21:56 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:21:56.604 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:21:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:56.799+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:57.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:21:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:57.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:21:57 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:21:57 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:21:57 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:57.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:58 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6308 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:21:58 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:58.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:21:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:21:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:21:59.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:21:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:21:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:21:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:21:59.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:21:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:21:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:21:59.717+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:21:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:00 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:00.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:01 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:01.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:01.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:01.777+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 63 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:02 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:02.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:03.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:03 np0005592159 ceph-mon[77081]: 63 slow requests (by type [ 'delayed' : 63 ] most affected pool [ 'vms' : 42 ])
Jan 22 10:22:03 np0005592159 ceph-mon[77081]: Health check update: 63 slow ops, oldest one blocked for 6313 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:03.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:03.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:04.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:04 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:05.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:05.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:05.740+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:06 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:06.743+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:07.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:07 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:22:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:07.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:07.730+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:08 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6318 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:08 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:08.780+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:09.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:09.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:09.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:09 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:10.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:11 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:11.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:11.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:11.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:12 np0005592159 podman[279027]: 2026-01-22 15:22:12.090423288 +0000 UTC m=+0.138592913 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 22 10:22:12 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:12.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:13.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:13 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:13 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6323 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:13.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:13.716+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:14 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:14.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:15.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:15.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:15.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:16 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:16.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:17 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.003000080s ======
Jan 22 10:22:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000080s
Jan 22 10:22:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:17.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6328 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:18 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:18.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:19.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:19 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:19.733+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:20.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:20 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:22:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:20 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:22:21 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:21.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:21.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:21.744+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:22 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:22.746+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:23.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:23 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:23 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:23.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:23.709+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:24 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:24.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:25.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:25.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:25.724+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:25 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:25 np0005592159 podman[279063]: 2026-01-22 15:22:25.992190199 +0000 UTC m=+0.054384906 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:22:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:26.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:27 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:27.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:27.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:27.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:28 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:28 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:28.727+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:29.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:29 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:29 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:29.755+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:30.787+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:31.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:31 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:31.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:31.795+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 26 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592159 ceph-mon[77081]: 26 slow requests (by type [ 'delayed' : 26 ] most affected pool [ 'vms' : 19 ])
Jan 22 10:22:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:32.760+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:33.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:33 np0005592159 ceph-mon[77081]: Health check update: 26 slow ops, oldest one blocked for 6343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:33 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:33.735+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:34 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:34.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:35.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:35.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:35 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:36.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:36 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:37.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:37.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:37.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:38 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:38 np0005592159 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:38.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:39.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:39 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:39.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:39.854+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:40 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:40.842+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:41.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:41.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:41.836+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:41 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:42.808+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:42 np0005592159 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:43 np0005592159 podman[279143]: 2026-01-22 15:22:43.030216919 +0000 UTC m=+0.091969084 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:22:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:43.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:43.759+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:43 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:44.728+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:44 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:45.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:45.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:45.687+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:45 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:46.662+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:46 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.260 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:22:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:22:47.261 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:22:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:47.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:47.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:47.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:47 np0005592159 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:48.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:48 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:49.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:49.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:49.648+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:49 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:50.616+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:51 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:51.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:51.627+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:51.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:52 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:52.663+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592159 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:53.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:53.623+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:53.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:54 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:54.648+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:55 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:55.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:22:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:22:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:55.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:56 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:56.694+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 27 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:57 np0005592159 podman[279226]: 2026-01-22 15:22:57.00122946 +0000 UTC m=+0.059932573 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:22:57 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:57.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:57.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:57.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:22:58 np0005592159 ceph-mon[77081]: 27 slow requests (by type [ 'delayed' : 27 ] most affected pool [ 'vms' : 20 ])
Jan 22 10:22:58 np0005592159 ceph-mon[77081]: Health check update: 27 slow ops, oldest one blocked for 6368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:22:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:58.712+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:22:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:22:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:22:59.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:22:59 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:22:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:22:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:22:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:22:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:22:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:22:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:22:59.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 52 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:22:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #214. Immutable memtables: 0.
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.603179) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 214
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380603250, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 2811, "num_deletes": 569, "total_data_size": 5296550, "memory_usage": 5378544, "flush_reason": "Manual Compaction"}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #215: started
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: 52 slow requests (by type [ 'delayed' : 52 ] most affected pool [ 'vms' : 31 ])
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380629873, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 215, "file_size": 3454007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 103913, "largest_seqno": 106719, "table_properties": {"data_size": 3443431, "index_size": 5853, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3653, "raw_key_size": 33366, "raw_average_key_size": 23, "raw_value_size": 3417977, "raw_average_value_size": 2380, "num_data_blocks": 250, "num_entries": 1436, "num_filter_entries": 1436, "num_deletions": 569, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095195, "oldest_key_time": 1769095195, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 26740 microseconds, and 11081 cpu microseconds.
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.629930) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #215: 3454007 bytes OK
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.629956) [db/memtable_list.cc:519] [default] Level-0 commit table #215 started
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632129) [db/memtable_list.cc:722] [default] Level-0 commit table #215: memtable #1 done
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632147) EVENT_LOG_v1 {"time_micros": 1769095380632141, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.632167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 5282664, prev total WAL file size 5282664, number of live WAL files 2.
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000211.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.634077) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [215(3373KB)], [213(11MB)]
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380634119, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [215], "files_L6": [213], "score": -1, "input_data_size": 15469609, "oldest_snapshot_seqno": -1}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #216: 14399 keys, 13609583 bytes, temperature: kUnknown
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380740088, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 216, "file_size": 13609583, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13528898, "index_size": 43580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 394666, "raw_average_key_size": 27, "raw_value_size": 13281876, "raw_average_value_size": 922, "num_data_blocks": 1597, "num_entries": 14399, "num_filter_entries": 14399, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095380, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 216, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.740387) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 13609583 bytes
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.741952) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.9 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 11.5 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 15552, records dropped: 1153 output_compression: NoCompression
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.741968) EVENT_LOG_v1 {"time_micros": 1769095380741960, "job": 138, "event": "compaction_finished", "compaction_time_micros": 106045, "compaction_time_cpu_micros": 44121, "output_level": 6, "num_output_files": 1, "total_output_size": 13609583, "num_input_records": 15552, "num_output_records": 14399, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:00.741+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380742686, "job": 138, "event": "table_file_deletion", "file_number": 215}
Jan 22 10:23:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000213.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095380745099, "job": 138, "event": "table_file_deletion", "file_number": 213}
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.633959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745219) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745227) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:00 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:00.745236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:01.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:01 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:01.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:01.729+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:02.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:03 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:03.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:03.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:03.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:04 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:04.765+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:05 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:05.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:05.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:05.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:06 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:06.675+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:07.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:07.643+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:07.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:08 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:08 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:23:08 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:08 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:23:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:08.608+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:09 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:09.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:09.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:09.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:10 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:10.670+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:11.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:11.651+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:11.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:12 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:12.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:13.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:13.587+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:13 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:13.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:14 np0005592159 podman[279434]: 2026-01-22 15:23:14.069638246 +0000 UTC m=+0.130684532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:23:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:14.584+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:14 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:15.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:15.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:15.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:16 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:23:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:16.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:17.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:17.557+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:17.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #217. Immutable memtables: 0.
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.729496) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 217
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397729652, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 514, "num_deletes": 278, "total_data_size": 529492, "memory_usage": 538816, "flush_reason": "Manual Compaction"}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #218: started
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397733505, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 218, "file_size": 315797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 106724, "largest_seqno": 107233, "table_properties": {"data_size": 313176, "index_size": 592, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 8080, "raw_average_key_size": 21, "raw_value_size": 307427, "raw_average_value_size": 819, "num_data_blocks": 25, "num_entries": 375, "num_filter_entries": 375, "num_deletions": 278, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095381, "oldest_key_time": 1769095381, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 4096 microseconds, and 1353 cpu microseconds.
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733595) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #218: 315797 bytes OK
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.733646) [db/memtable_list.cc:519] [default] Level-0 commit table #218 started
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735136) [db/memtable_list.cc:722] [default] Level-0 commit table #218: memtable #1 done
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735147) EVENT_LOG_v1 {"time_micros": 1769095397735144, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735162) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 526300, prev total WAL file size 526300, number of live WAL files 2.
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000214.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735839) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323539' seq:0, type:0; will stop at (end)
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [218(308KB)], [216(12MB)]
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397735885, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [218], "files_L6": [216], "score": -1, "input_data_size": 13925380, "oldest_snapshot_seqno": -1}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #219: 14208 keys, 10036399 bytes, temperature: kUnknown
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397796752, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 219, "file_size": 10036399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9961674, "index_size": 38132, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 390776, "raw_average_key_size": 27, "raw_value_size": 9722702, "raw_average_value_size": 684, "num_data_blocks": 1369, "num_entries": 14208, "num_filter_entries": 14208, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095397, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 219, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.797148) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10036399 bytes
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.798591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.4 rd, 164.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 13.0 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(75.9) write-amplify(31.8) OK, records in: 14774, records dropped: 566 output_compression: NoCompression
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.798629) EVENT_LOG_v1 {"time_micros": 1769095397798613, "job": 140, "event": "compaction_finished", "compaction_time_micros": 60971, "compaction_time_cpu_micros": 27932, "output_level": 6, "num_output_files": 1, "total_output_size": 10036399, "num_input_records": 14774, "num_output_records": 14208, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397798949, "job": 140, "event": "table_file_deletion", "file_number": 218}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000216.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095397804127, "job": 140, "event": "table_file_deletion", "file_number": 216}
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.735766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804206) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:17 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:17.804215) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:18 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:18 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:18.514+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:19 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:19.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:19.564+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:19.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:20 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:20.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:21.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:21.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:21.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:22 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:22.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:23.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:23.475+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:23.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:24 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:24.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:25 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:25.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:25.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:25.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:26.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:26 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:27.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:27.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:27.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:27 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:27 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:27 np0005592159 podman[279518]: 2026-01-22 15:23:27.975259469 +0000 UTC m=+0.040503887 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:23:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:28.537+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:29 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:29.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:29.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:29.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:30 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:30.538+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:31 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:31.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:31.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:32.514+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:32 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:32 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:33.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:33.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:33.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:33 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:33 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:34.522+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:35 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:35.485+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:35.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:36.495+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:36 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:37.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:37.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:37 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:37 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:38.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:39 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:39 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:39.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:39.479+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:39.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:40 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:40 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:40.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:41.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:41.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:41.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:41 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:42.467+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:43.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:43.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:43.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:43 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:43 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6412 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:44.456+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:44 np0005592159 podman[279595]: 2026-01-22 15:23:44.584610543 +0000 UTC m=+0.101655612 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller)
Jan 22 10:23:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:44 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:44 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:45.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:45.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:45.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:45 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:46.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:46 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.261 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:23:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:23:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:23:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:47.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:47.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:47.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:47 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:47 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6417 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:48.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:48 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:49.402+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:49.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:49.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:50 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:50.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:51.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:51.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:51.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:52 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:52.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6422 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:53.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:53.472+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:53.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:54 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:54.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:23:55 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:55.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:55.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:55.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:56 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:56.448+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:23:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:57.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:23:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:57.458+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:57.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #220. Immutable memtables: 0.
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.775537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 220
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437775572, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 814, "num_deletes": 325, "total_data_size": 1095315, "memory_usage": 1111760, "flush_reason": "Manual Compaction"}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #221: started
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437783810, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 221, "file_size": 717997, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 107238, "largest_seqno": 108047, "table_properties": {"data_size": 714385, "index_size": 1199, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 10658, "raw_average_key_size": 20, "raw_value_size": 706207, "raw_average_value_size": 1368, "num_data_blocks": 53, "num_entries": 516, "num_filter_entries": 516, "num_deletions": 325, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095398, "oldest_key_time": 1769095398, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 8339 microseconds, and 2842 cpu microseconds.
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783871) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #221: 717997 bytes OK
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.783896) [db/memtable_list.cc:519] [default] Level-0 commit table #221 started
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787223) [db/memtable_list.cc:722] [default] Level-0 commit table #221: memtable #1 done
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787240) EVENT_LOG_v1 {"time_micros": 1769095437787235, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787257) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 1090708, prev total WAL file size 1090708, number of live WAL files 2.
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000217.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787795) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035303334' seq:72057594037927935, type:22 .. '6C6F676D0035323837' seq:0, type:0; will stop at (end)
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [221(701KB)], [219(9801KB)]
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437787860, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [221], "files_L6": [219], "score": -1, "input_data_size": 10754396, "oldest_snapshot_seqno": -1}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #222: 14065 keys, 10584053 bytes, temperature: kUnknown
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437868384, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 222, "file_size": 10584053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10509312, "index_size": 38468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35205, "raw_key_size": 388461, "raw_average_key_size": 27, "raw_value_size": 10271914, "raw_average_value_size": 730, "num_data_blocks": 1380, "num_entries": 14065, "num_filter_entries": 14065, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 222, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.868666) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 10584053 bytes
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.870527) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.5 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(29.7) write-amplify(14.7) OK, records in: 14724, records dropped: 659 output_compression: NoCompression
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.870553) EVENT_LOG_v1 {"time_micros": 1769095437870536, "job": 142, "event": "compaction_finished", "compaction_time_micros": 80574, "compaction_time_cpu_micros": 28607, "output_level": 6, "num_output_files": 1, "total_output_size": 10584053, "num_input_records": 14724, "num_output_records": 14065, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437870814, "job": 142, "event": "table_file_deletion", "file_number": 221}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000219.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095437873091, "job": 142, "event": "table_file_deletion", "file_number": 219}
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.787685) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:57 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:23:57.873163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:23:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:58.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:58 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6427 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:23:59 np0005592159 podman[279680]: 2026-01-22 15:23:59.033184224 +0000 UTC m=+0.080642910 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 22 10:23:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:23:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:23:59.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:23:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:23:59.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:23:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:23:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:23:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:23:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:23:59.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:23:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:00 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:00.458+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:01.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:01.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:01.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:02.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:02 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:02 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000053s ======
Jan 22 10:24:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:03.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 22 10:24:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:03.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:03.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:04 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:04 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:04.443+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:05 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:05.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:05.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:05.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:06 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:06.421+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:07.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:07.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:07.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:08 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:08 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6437 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:08.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:09 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:09.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:09.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:09.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:10 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:10.415+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:11.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:11.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:11.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:12.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:12 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:13.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:13.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 156 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:13.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:13 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:13 np0005592159 ceph-mon[77081]: Health check update: 156 slow ops, oldest one blocked for 6442 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:14.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:15 np0005592159 podman[279758]: 2026-01-22 15:24:15.053452398 +0000 UTC m=+0.104765490 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 22 10:24:15 np0005592159 ceph-mon[77081]: 156 slow requests (by type [ 'delayed' : 156 ] most affected pool [ 'vms' : 92 ])
Jan 22 10:24:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:15.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:15.518+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:15.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:16 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:16.491+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:24:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:24:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:17 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:24:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:17.446+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:17.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:17.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:18 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:18 np0005592159 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:18.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:19.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:19.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:19.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:20 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:20.474+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:21.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:21.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:22 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:22.513+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:23.469+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:23.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:23 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:23 np0005592159 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:24:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:23.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:24.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:24 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:25.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:25.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:25.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:26.566+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:26 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:27.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:27.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:27.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:27 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:27 np0005592159 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:28.593+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:29 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:29.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:29.580+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:29.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:30 np0005592159 podman[280022]: 2026-01-22 15:24:30.014904432 +0000 UTC m=+0.072440343 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:24:30 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:30.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:31.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:31.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:31.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:32.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:32 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:33.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:33.507+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:33.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:34 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:34 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:34.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:35 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:35.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:35.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:35.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:36 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:36.494+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:37.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:37 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592159 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:37.521+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:37.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:38.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:38 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:38 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:39.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:39.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:39.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:39 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:40.521+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:41.484+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:41.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:42 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:42.518+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:43 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:43 np0005592159 ceph-mon[77081]: Health check update: 71 slow ops, oldest one blocked for 6473 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:43.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:43.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:44 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 44 ])
Jan 22 10:24:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:44.533+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:45 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:45.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:45.560+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:45.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:46 np0005592159 podman[280050]: 2026-01-22 15:24:46.057580032 +0000 UTC m=+0.111521189 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 22 10:24:46 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:46.547+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:24:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:24:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:24:47 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:47.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:47.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:47.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:48.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:48 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:48 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6478 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:49.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:49.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:49 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:49.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:50.563+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:50 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:51.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:51.612+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:51.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:51 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:52.585+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6483 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:53.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:53.596+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:53.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:54 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:54.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 74 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:24:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:55.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:55 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592159 ceph-mon[77081]: 74 slow requests (by type [ 'delayed' : 74 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:24:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:55.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:55.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:56 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:56.657+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:24:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:57.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:24:57 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:57.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:57.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:58 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:58 np0005592159 ceph-mon[77081]: Health check update: 74 slow ops, oldest one blocked for 6488 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:24:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:58.676+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:24:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:24:59.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:24:59 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:24:59.706+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:24:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:24:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:24:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:24:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:24:59.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:24:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:00 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:00.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:01 np0005592159 podman[280133]: 2026-01-22 15:25:01.045659893 +0000 UTC m=+0.088545260 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 22 10:25:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:01.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:01.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:01.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:01 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:02.766+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592159 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6493 sec, osd.2 has slow ops (SLOW_OPS)
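Note: the SLOW_OPS health check above summarizes the same blocked omap-get-vals read that osd.2 keeps reporting in the surrounding lines (oldest op blocked for roughly 6493 s at this point). A hedged sketch of how those ops could be inspected further; the commands are standard Ceph CLI, but running them from a cephadm shell or the OSD container with admin credentials is an assumption about this deployment:

    import subprocess

    def ceph(*args):
        # Thin wrapper around the ceph CLI; assumes a reachable cluster and keyring.
        return subprocess.run(["ceph", *args], capture_output=True, text=True).stdout

    # Cluster-wide view of the SLOW_OPS warning seen in the mon log.
    print(ceph("health", "detail"))

    # Ops currently in flight on osd.2, including how long each has been queued;
    # needs access to the daemon's admin socket (e.g. on the OSD host/container).
    print(ceph("daemon", "osd.2", "dump_ops_in_flight"))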
Jan 22 10:25:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:03.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:03.813+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:04 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:04.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:05 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:05.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:05.812+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:05.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:06 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:06.801+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:07.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:07.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:07.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:08 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:08 np0005592159 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6498 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:08.791+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:09.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:09 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:09.783+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 108 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:09.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:10.801+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:10 np0005592159 ceph-mon[77081]: 108 slow requests (by type [ 'delayed' : 108 ] most affected pool [ 'vms' : 66 ])
Jan 22 10:25:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:11.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:11.777+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:11.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:12 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:12.758+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592159 ceph-mon[77081]: Health check update: 108 slow ops, oldest one blocked for 6503 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:13.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:13.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:13.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:14 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:14.734+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:15 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:15.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:15.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:15.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:16 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:16.714+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592159 podman[280210]: 2026-01-22 15:25:17.030834346 +0000 UTC m=+0.099606363 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:25:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:17.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:17 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:17.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:17.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2657378040' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:18 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6508 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:18.740+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:19.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:19 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:19.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:19.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:20.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:20 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:21.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:21.720+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:21.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:22 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:22.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6513 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:23.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:23.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:23.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:24 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:25:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:24 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:25:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:24.691+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:25.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:25.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:25.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:26.651+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:26 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:27.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:27.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:27.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:28.700+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:28 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:28 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6518 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:25:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 20K writes, 109K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1719 writes, 9797 keys, 1719 commit groups, 1.0 writes per commit group, ingest: 16.28 MB, 0.03 MB/s#012Interval WAL: 1719 writes, 1719 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.4      2.14              0.43        71    0.030       0      0       0.0       0.0#012  L6      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    116.7    101.0      6.76              2.31        70    0.097    742K    40K       0.0       0.0#012 Sum      1/0   10.09 MB   0.0      0.8     0.1      0.7       0.8      0.1       0.0   6.9     88.7     89.8      8.90              2.74       141    0.063    742K    40K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.8     37.7     37.9      1.99              0.23        12    0.166     89K   4955       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    116.7    101.0      6.76              2.31        70    0.097    742K    40K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.5      2.13              0.43        70    0.030       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6600.0 total, 600.0 interval#012Flush(GB): cumulative 0.114, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.78 GB write, 0.12 MB/s write, 0.77 GB read, 0.12 MB/s read, 8.9 seconds#012Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 2.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 83.39 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000558 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4357,78.87 MB,25.9432%) FilterBlock(141,2.02 MB,0.663491%) IndexBlock(141,2.50 MB,0.823397%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:25:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:29.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:29.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:29 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:29.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:30.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:31 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:25:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:31.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:31.721+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:31.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:32 np0005592159 podman[280474]: 2026-01-22 15:25:32.047286137 +0000 UTC m=+0.089824763 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:25:32 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:32.718+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6523 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:33.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:33.752+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:34 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:34.725+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:35 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:35.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:35.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:36 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:36.805+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:37.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:37.793+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:37.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:38 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:38 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:38.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
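osd.2 re-reports the same 164 blocked operations once a second, and each report shows up twice here, once from the containerized ceph-088fe176-...-osd-2 unit and once via ceph-osd, while the monitor's SLOW_OPS health check only restates the total every few seconds with a growing "oldest one blocked for" age; 6528 s means the oldest op, the omap-get-vals read of rbd_mirror_snapshot_schedule, has been stuck for roughly 1 h 49 m. A minimal sketch for tallying these reports out of a saved copy of the journal; the file name slow_ops.log and both regular expressions are assumptions based only on the line formats visible here.

import re
from collections import Counter

# "osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(..." lines
OSD_RE = re.compile(r"(osd\.\d+) \d+ get_health_metrics reporting (\d+) slow ops")
# "Health check update: 164 slow ops, oldest one blocked for 6528 sec, osd.2 has slow ops (SLOW_OPS)"
MON_RE = re.compile(r"Health check update: (\d+) slow ops, oldest one blocked for (\d+) sec, (osd\.\d+)")

def summarize(path="slow_ops.log"):
    reports = Counter()   # per-OSD count of get_health_metrics reports in the extract
    peak_ops = Counter()  # per-OSD maximum slow-op count seen
    blocked = Counter()   # per-OSD maximum "oldest one blocked for" age, in seconds
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = OSD_RE.search(line)
            if m:
                osd, count = m.group(1), int(m.group(2))
                reports[osd] += 1
                peak_ops[osd] = max(peak_ops[osd], count)
                continue
            m = MON_RE.search(line)
            if m:
                count, age, osd = int(m.group(1)), int(m.group(2)), m.group(3)
                peak_ops[osd] = max(peak_ops[osd], count)
                blocked[osd] = max(blocked[osd], age)
    for osd in sorted(peak_ops):
        print(f"{osd}: up to {peak_ops[osd]} slow ops, oldest blocked ~{blocked[osd] // 60} min, "
              f"{reports[osd]} OSD-side reports")

if __name__ == "__main__":
    summarize()
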
Jan 22 10:25:39 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:39.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:39.745+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:39.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:40.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:41 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:41.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:41.772+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:41.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:42 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:42 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:42.749+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6533 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:43.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:43.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:44 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:44.786+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:45 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:45.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:45.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:45.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:46.819+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:47 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.263 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:25:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:25:47.264 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:25:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:47.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:47.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:47.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:48 np0005592159 podman[280501]: 2026-01-22 15:25:48.075150456 +0000 UTC m=+0.125506150 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:25:48 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:48 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6538 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:48.769+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:49 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:49.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:49.763+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:49.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:50 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:50 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:50.748+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:51.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:51.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:51.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:52 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:52.690+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:53.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:53.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:53 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:53 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6543 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:53.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #223. Immutable memtables: 0.
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.083891) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 143] Flushing memtable with next log file: 223
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554083995, "job": 143, "event": "flush_started", "num_memtables": 1, "num_entries": 1928, "num_deletes": 449, "total_data_size": 3296326, "memory_usage": 3359256, "flush_reason": "Manual Compaction"}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 143] Level-0 flush table #224: started
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554150028, "cf_name": "default", "job": 143, "event": "table_file_creation", "file_number": 224, "file_size": 2150401, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 108052, "largest_seqno": 109975, "table_properties": {"data_size": 2143190, "index_size": 3576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 23099, "raw_average_key_size": 22, "raw_value_size": 2125694, "raw_average_value_size": 2088, "num_data_blocks": 155, "num_entries": 1018, "num_filter_entries": 1018, "num_deletions": 449, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095438, "oldest_key_time": 1769095438, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 143] Flush lasted 66189 microseconds, and 10398 cpu microseconds.
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.150094) [db/flush_job.cc:967] [default] [JOB 143] Level-0 flush table #224: 2150401 bytes OK
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.150120) [db/memtable_list.cc:519] [default] Level-0 commit table #224 started
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205351) [db/memtable_list.cc:722] [default] Level-0 commit table #224: memtable #1 done
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205392) EVENT_LOG_v1 {"time_micros": 1769095554205384, "job": 143, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.205415) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 143] Try to delete WAL files size 3286617, prev total WAL file size 3286617, number of live WAL files 2.
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000220.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.206484) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 144] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 143 Base level 0, inputs: [224(2100KB)], [222(10MB)]
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554206541, "job": 144, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [224], "files_L6": [222], "score": -1, "input_data_size": 12734454, "oldest_snapshot_seqno": -1}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 144] Generated table #225: 14172 keys, 10835589 bytes, temperature: kUnknown
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554459951, "cf_name": "default", "job": 144, "event": "table_file_creation", "file_number": 225, "file_size": 10835589, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10759977, "index_size": 39083, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35461, "raw_key_size": 390614, "raw_average_key_size": 27, "raw_value_size": 10520563, "raw_average_value_size": 742, "num_data_blocks": 1404, "num_entries": 14172, "num_filter_entries": 14172, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095554, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 225, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.460272) [db/compaction/compaction_job.cc:1663] [default] [JOB 144] Compacted 1@0 + 1@6 files to L6 => 10835589 bytes
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.617648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 50.2 rd, 42.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.1 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(11.0) write-amplify(5.0) OK, records in: 15083, records dropped: 911 output_compression: NoCompression
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.617685) EVENT_LOG_v1 {"time_micros": 1769095554617672, "job": 144, "event": "compaction_finished", "compaction_time_micros": 253502, "compaction_time_cpu_micros": 33148, "output_level": 6, "num_output_files": 1, "total_output_size": 10835589, "num_input_records": 15083, "num_output_records": 14172, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554618298, "job": 144, "event": "table_file_deletion", "file_number": 224}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000222.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095554620136, "job": 144, "event": "table_file_deletion", "file_number": 222}
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.206405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620284) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:25:54.620372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
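The ceph-mon rocksdb burst above is one manual compaction cycle of the monitor store: job 143 flushes a roughly 3.3 MB memtable to level-0 table #224, job 144 compacts that table together with the existing level-6 table #222 into table #225 (write-amplify 5.0), and the now-obsolete WAL 000220.log and tables 000224.sst / 000222.sst are deleted. The EVENT_LOG_v1 entries carry a JSON payload after the marker, so the same figures can be pulled out mechanically; a minimal sketch, reusing the assumed slow_ops.log extract from above:

import json

def rocksdb_events(path="slow_ops.log"):
    """Yield (event, payload) for every rocksdb EVENT_LOG_v1 line in the journal extract."""
    marker = "EVENT_LOG_v1 "
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            pos = line.find(marker)
            if pos == -1:
                continue
            payload = json.loads(line[pos + len(marker):])
            yield payload.get("event", "?"), payload

if __name__ == "__main__":
    for event, data in rocksdb_events():
        if event == "flush_finished":
            print(f"flush job {data['job']}: lsm_state {data['lsm_state']}")
        elif event == "compaction_finished":
            secs = data["compaction_time_micros"] / 1e6
            print(f"compaction job {data['job']}: {data['total_output_size']} bytes written in {secs:.2f}s")
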
Jan 22 10:25:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:54.667+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:25:55 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:55.697+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:55.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:55.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:56 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:56.737+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:57.781+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:25:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:57.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:25:58 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:58 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6548 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:25:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:58.818+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:59 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:25:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:25:59.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:25:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:25:59.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:25:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:25:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:25:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:25:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:25:59.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:25:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:00 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:00.864+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:01.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:01 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:01.872+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:02.826+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:02 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:03 np0005592159 podman[280586]: 2026-01-22 15:26:03.022132658 +0000 UTC m=+0.071149338 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:26:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:03.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:03.851+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:03.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:04 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:04 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6553 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:04.885+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:05 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:05.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:05.917+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:06 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:06.917+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:07.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:07.884+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:08 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:08 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6558 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:08.858+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:09.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:09.888+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:09.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:10 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:10.877+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:11.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:11.830+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:11.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:12 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:12.809+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6563 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:13.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:13.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:13.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:14 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:14.834+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:15 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:15.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:15.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:16 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:16.807+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:17.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:17.831+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:17.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:18 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:18 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6568 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:18.851+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:19 np0005592159 podman[280664]: 2026-01-22 15:26:19.024285489 +0000 UTC m=+0.080356552 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:26:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:19.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:19.891+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:19.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:20 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:20.930+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:21.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:21.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:21.950+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:22.961+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:23 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:23.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:23.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:23.960+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:24 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:24 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6573 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:24 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:25 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:25.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:25.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:26.532+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:26 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:27.575+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:27.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:27.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:28.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:28 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:28 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6578 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:28 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:29.573+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:29.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:29.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:30.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:30 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:26:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.5 total, 600.0 interval
Cumulative writes: 13K writes, 43K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 13K writes, 4531 syncs, 3.00 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 747 writes, 1292 keys, 747 commit groups, 1.0 writes per commit group, ingest: 0.52 MB, 0.00 MB/s
Interval WAL: 747 writes, 310 syncs, 2.41 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:26:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:31.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:31.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:31 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:31.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:32.576+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6583 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:33 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:26:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:33.567+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:33.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:33.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:34 np0005592159 podman[280879]: 2026-01-22 15:26:34.005846168 +0000 UTC m=+0.062925880 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 22 10:26:34 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:34.577+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:35 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:35.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:35.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:35.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:36 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:36.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:37.605+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 164 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:37.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:37.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:38 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:38 np0005592159 ceph-mon[77081]: Health check update: 164 slow ops, oldest one blocked for 6588 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:38.568+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:39 np0005592159 ceph-mon[77081]: 164 slow requests (by type [ 'delayed' : 164 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:26:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:26:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:39.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:39.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:39.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:40 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:40 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:40.541+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:41.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:41.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:41.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:42.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:43.542+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:43.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:43 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:43 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 6593 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:43.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:44.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:44 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:45.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:45.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:45 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:45.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:46.558+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:46 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:26:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:26:47.265 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:26:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:47.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:47.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:47.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:47 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:47 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 6598 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:48.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:49 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:26:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:49.520+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:49.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:50 np0005592159 podman[280955]: 2026-01-22 15:26:50.02033431 +0000 UTC m=+0.083650960 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:26:50 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:50.534+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:51.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:51.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:51.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:52 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:52.530+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6603 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:53.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:53.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:26:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:53.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:26:54 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:54.517+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:26:55 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:55.509+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:26:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:55.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:26:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:55.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:56.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:57 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:57.497+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:57.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:26:57.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:26:58 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:58 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:58.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6608 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:26:59 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:26:59.512+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:26:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:26:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:26:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:26:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:26:59.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:00.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:00 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:00.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:01.561+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:01.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:02.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:02 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:02.599+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6613 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:03.615+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:03.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:04.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:04 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:04.574+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592159 podman[281039]: 2026-01-22 15:27:05.020581711 +0000 UTC m=+0.075398201 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 22 10:27:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:05 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:05.607+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:05.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:06.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:06 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:06.600+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:07.630+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:07.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:08.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:08 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:08 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6618 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:08.620+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:09 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:09.650+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:09.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:10.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:10 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:10.693+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:11.738+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:11.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:12 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:12.781+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:13 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:13 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6623 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:13.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:13.822+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:14.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:14 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:14.848+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:15 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:15.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:15.820+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:16.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:16.798+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:16 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:17.797+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 38 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:17 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:18.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:18.838+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:18 np0005592159 ceph-mon[77081]: 38 slow requests (by type [ 'delayed' : 38 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:27:18 np0005592159 ceph-mon[77081]: Health check update: 38 slow ops, oldest one blocked for 6628 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:19.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:19.833+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:20.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:20 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:20.795+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592159 podman[281117]: 2026-01-22 15:27:21.02764103 +0000 UTC m=+0.082728535 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 22 10:27:21 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:21.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:21.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:22.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:22 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:22.779+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592159 ceph-mon[77081]: Health check update: 80 slow ops, oldest one blocked for 6633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:23.788+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:23.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:24.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:24 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:24.817+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:25 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:25.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:25.845+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:26.813+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 80 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:27.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:27.843+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:28.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:28 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592159 ceph-mon[77081]: 80 slow requests (by type [ 'delayed' : 80 ] most affected pool [ 'vms' : 51 ])
Jan 22 10:27:28 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:28 np0005592159 ceph-mon[77081]: Health check update: 80 slow ops, oldest one blocked for 6638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:28.891+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:29 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:29.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:29.928+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:30.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:30 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:30.962+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:31.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:31.939+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:32 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:32.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:32.970+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:33 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6643 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:33.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:34.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:34 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:34.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:35.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:35 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:35.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:36 np0005592159 podman[281203]: 2026-01-22 15:27:36.008107745 +0000 UTC m=+0.069855414 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:27:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:36.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:36.052+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:36 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:37.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:37.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:38.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:38.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:38 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:38 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:38.995+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:39 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:39.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:39.986+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:40.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:40 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:41.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:41.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:42.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:42 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:42.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:43.052+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:43.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:27:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:44.032+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:44.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:44 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:45.054+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:45.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:45 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:46.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:46.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:47 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:47.043+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.266 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.267 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:27:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:27:47.267 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:27:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:47.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:48 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:48 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:48.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:48.071+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:49.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:49 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:49.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:50.055+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:50.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:50 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:27:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:51.058+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:51 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:51.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:52 np0005592159 podman[281464]: 2026-01-22 15:27:52.055188023 +0000 UTC m=+0.102237763 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:27:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:52.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:52.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:52 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:53.096+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:53 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6663 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:27:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:53.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:27:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:54.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:54.088+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:54 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:55.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:27:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:55.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:56.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:56.172+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:56 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:57.156+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 78 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:57.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:27:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:27:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:27:58.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:27:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:58.197+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:27:58 np0005592159 ceph-mon[77081]: 78 slow requests (by type [ 'delayed' : 78 ] most affected pool [ 'vms' : 49 ])
Jan 22 10:27:58 np0005592159 ceph-mon[77081]: Health check update: 78 slow ops, oldest one blocked for 6668 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:27:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:27:59.149+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:27:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:27:59 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:27:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:27:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:27:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:27:59.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:00.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:00.117+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:00 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:01.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:01.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:02.122+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:02 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:03.087+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:03 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6673 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:03.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:04.076+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:04 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:05.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:05 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:05.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:06.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:06.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:06 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:07 np0005592159 podman[281499]: 2026-01-22 15:28:07.003164465 +0000 UTC m=+0.054873296 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 22 10:28:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:07.065+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:08.064+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:08.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:08 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:09.064+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6678 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:09 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:09.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:10.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:10.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:10 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:11.035+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:11.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:12.053+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:12.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:12 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:13.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:13.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:13 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:13 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6683 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:14.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:14.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:15.034+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:15.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:16.056+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:16.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:16 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:16 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:17.053+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:17.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:18.010+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:18.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:19.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:19 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:19 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6688 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:19.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:20.018+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:20.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:21.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:21.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:22.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:22.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:22 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:23.022+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:23 np0005592159 podman[281577]: 2026-01-22 15:28:23.022790618 +0000 UTC m=+0.075666273 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 22 10:28:23 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6693 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:23.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:24.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:24.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:24 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:24 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:25.049+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:25.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:25 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:26.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:26.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:27.004+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:27.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:28.038+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:28.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:28 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:28 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6698 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:29.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:29.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:29.982+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:30.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:30 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:31.021+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:31.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:31.976+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:32.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:32 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:33.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:33.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:33.974+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:34.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:34 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:34 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6703 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:34.960+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:35 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:35.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:35.932+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:36.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:36 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:36 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:36.970+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:37.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:37 np0005592159 podman[281660]: 2026-01-22 15:28:37.987253498 +0000 UTC m=+0.050659882 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:28:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:38.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:38.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:38 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:38 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:39.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:39 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:39.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:40.032+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:40.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:40.986+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:40 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:40 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:41.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:42.026+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:42.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:42 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:43.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #226. Immutable memtables: 0.
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.496630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 145] Flushing memtable with next log file: 226
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723496689, "job": 145, "event": "flush_started", "num_memtables": 1, "num_entries": 2692, "num_deletes": 542, "total_data_size": 4859953, "memory_usage": 4936368, "flush_reason": "Manual Compaction"}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 145] Level-0 flush table #227: started
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723529425, "cf_name": "default", "job": 145, "event": "table_file_creation", "file_number": 227, "file_size": 3165791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 109980, "largest_seqno": 112667, "table_properties": {"data_size": 3155857, "index_size": 5403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 31224, "raw_average_key_size": 22, "raw_value_size": 3131924, "raw_average_value_size": 2304, "num_data_blocks": 228, "num_entries": 1359, "num_filter_entries": 1359, "num_deletions": 542, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095554, "oldest_key_time": 1769095554, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 145] Flush lasted 32831 microseconds, and 9196 cpu microseconds.
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.529468) [db/flush_job.cc:967] [default] [JOB 145] Level-0 flush table #227: 3165791 bytes OK
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.529490) [db/memtable_list.cc:519] [default] Level-0 commit table #227 started
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533395) [db/memtable_list.cc:722] [default] Level-0 commit table #227: memtable #1 done
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533419) EVENT_LOG_v1 {"time_micros": 1769095723533412, "job": 145, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.533444) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 145] Try to delete WAL files size 4846747, prev total WAL file size 4846747, number of live WAL files 2.
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000223.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.534721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035323836' seq:72057594037927935, type:22 .. '6C6F676D0035353339' seq:0, type:0; will stop at (end)
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 146] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 145 Base level 0, inputs: [227(3091KB)], [225(10MB)]
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723534750, "job": 146, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [227], "files_L6": [225], "score": -1, "input_data_size": 14001380, "oldest_snapshot_seqno": -1}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 146] Generated table #228: 14432 keys, 13749815 bytes, temperature: kUnknown
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723837940, "cf_name": "default", "job": 146, "event": "table_file_creation", "file_number": 228, "file_size": 13749815, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13669490, "index_size": 43156, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36101, "raw_key_size": 395987, "raw_average_key_size": 27, "raw_value_size": 13422707, "raw_average_value_size": 930, "num_data_blocks": 1574, "num_entries": 14432, "num_filter_entries": 14432, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095723, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 228, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.838202) [db/compaction/compaction_job.cc:1663] [default] [JOB 146] Compacted 1@0 + 1@6 files to L6 => 13749815 bytes
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.841542) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 46.2 rd, 45.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 10.3 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(8.8) write-amplify(4.3) OK, records in: 15531, records dropped: 1099 output_compression: NoCompression
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.841568) EVENT_LOG_v1 {"time_micros": 1769095723841556, "job": 146, "event": "compaction_finished", "compaction_time_micros": 303274, "compaction_time_cpu_micros": 30786, "output_level": 6, "num_output_files": 1, "total_output_size": 13749815, "num_input_records": 15531, "num_output_records": 14432, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000227.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723842342, "job": 146, "event": "table_file_deletion", "file_number": 227}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000225.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095723844102, "job": 146, "event": "table_file_deletion", "file_number": 225}
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.534646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:43.844152) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:43.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:43.982+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:44.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:44 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:44 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6713 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:44 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:45.003+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:45 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:45.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:46.017+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:46.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #229. Immutable memtables: 0.
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.884756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 147] Flushing memtable with next log file: 229
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726884852, "job": 147, "event": "flush_started", "num_memtables": 1, "num_entries": 308, "num_deletes": 258, "total_data_size": 128067, "memory_usage": 135144, "flush_reason": "Manual Compaction"}
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 147] Level-0 flush table #230: started
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726894378, "cf_name": "default", "job": 147, "event": "table_file_creation", "file_number": 230, "file_size": 83592, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112672, "largest_seqno": 112975, "table_properties": {"data_size": 81614, "index_size": 141, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5312, "raw_average_key_size": 18, "raw_value_size": 77703, "raw_average_value_size": 274, "num_data_blocks": 6, "num_entries": 283, "num_filter_entries": 283, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095723, "oldest_key_time": 1769095723, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 230, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 147] Flush lasted 9646 microseconds, and 1318 cpu microseconds.
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.894421) [db/flush_job.cc:967] [default] [JOB 147] Level-0 flush table #230: 83592 bytes OK
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.894442) [db/memtable_list.cc:519] [default] Level-0 commit table #230 started
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922356) [db/memtable_list.cc:722] [default] Level-0 commit table #230: memtable #1 done
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922421) EVENT_LOG_v1 {"time_micros": 1769095726922408, "job": 147, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.922455) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 147] Try to delete WAL files size 125797, prev total WAL file size 125797, number of live WAL files 2.
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000226.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.923179) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039353338' seq:72057594037927935, type:22 .. '7061786F730039373930' seq:0, type:0; will stop at (end)
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 148] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 147 Base level 0, inputs: [230(81KB)], [228(13MB)]
Jan 22 10:28:46 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095726923262, "job": 148, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [230], "files_L6": [228], "score": -1, "input_data_size": 13833407, "oldest_snapshot_seqno": -1}
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 148] Generated table #231: 14192 keys, 12058708 bytes, temperature: kUnknown
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727021649, "cf_name": "default", "job": 148, "event": "table_file_creation", "file_number": 231, "file_size": 12058708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11981266, "index_size": 40849, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35525, "raw_key_size": 391674, "raw_average_key_size": 27, "raw_value_size": 11739848, "raw_average_value_size": 827, "num_data_blocks": 1472, "num_entries": 14192, "num_filter_entries": 14192, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095726, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 231, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.022045) [db/compaction/compaction_job.cc:1663] [default] [JOB 148] Compacted 1@0 + 1@6 files to L6 => 12058708 bytes
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.025764) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.3 rd, 122.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 13.1 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(309.7) write-amplify(144.3) OK, records in: 14715, records dropped: 523 output_compression: NoCompression
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.025781) EVENT_LOG_v1 {"time_micros": 1769095727025774, "job": 148, "event": "compaction_finished", "compaction_time_micros": 98566, "compaction_time_cpu_micros": 41832, "output_level": 6, "num_output_files": 1, "total_output_size": 12058708, "num_input_records": 14715, "num_output_records": 14192, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000230.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727026243, "job": 148, "event": "table_file_deletion", "file_number": 230}
Jan 22 10:28:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:47.026+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000228.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095727028717, "job": 148, "event": "table_file_deletion", "file_number": 228}
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:46.923071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029406) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029419) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:28:47.029421) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:28:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:28:47.268 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:28:47 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:47.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:48.019+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:48.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:49 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:49 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:49 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:49.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:49.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:50.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:50.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:50 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:51.144+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:51.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:52.159+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:52.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:53.175+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:53.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:28:54 np0005592159 podman[281869]: 2026-01-22 15:28:54.026410099 +0000 UTC m=+0.086215951 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:28:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:28:54 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:28:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:28:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:54.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:54.175+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:55.148+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:55 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:55 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:55.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:56.165+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:56.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:56 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:28:56 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:56 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:57.170+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:28:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:57.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:28:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:28:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:28:58.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:28:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:58.206+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:28:59.191+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:28:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:59 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:28:59 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:28:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:28:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:28:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:28:59.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:00.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:00.226+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:00 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:01.207+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:01.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:01 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:02.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:02.229+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:02 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:29:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:03.203+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:03.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:04.154+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:04 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6732 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:04 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:05.157+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:05.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:05 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:06.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:06.181+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:06 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:07.142+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:08.128+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:08.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:08 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:08 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:09 np0005592159 podman[281954]: 2026-01-22 15:29:09.015418441 +0000 UTC m=+0.062031682 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:29:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:09.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:10 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:10.168+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:10.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:11.217+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:11 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:11.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:12.177+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:12.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:12 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:13.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:13.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:14.160+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:14.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:14 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:14 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:15.195+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:15.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:15 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:16.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:16.220+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:16 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:17 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6748 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:17 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:17.238+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:17.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:18 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:18.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:18.242+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:19.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:19.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:20.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:20.246+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:20 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:21.278+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:21 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:21.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:22 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:22 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:22.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:22.293+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:22 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:23.248+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:23 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6752 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:23 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:23.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:29:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:24.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:29:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:24.264+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:24 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:25 np0005592159 podman[282036]: 2026-01-22 15:29:25.037341918 +0000 UTC m=+0.094226494 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 22 10:29:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:25.258+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:25.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:25 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:26.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:26.227+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:26 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:26 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:27.246+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:27.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:28.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:28.252+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:28 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:29.234+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:29 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6757 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:29.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:30.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:30.238+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:30 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:30 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:31.248+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:31 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:31 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:31.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:32.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:32.271+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:33 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:33.319+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:33.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:34.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:34 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:34 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6762 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:34.364+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:35.405+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:35 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:36.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:36.420+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:36 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:36 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:36 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:37.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:37.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:38.416+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:38 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:39.418+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:40 np0005592159 podman[282120]: 2026-01-22 15:29:40.013257141 +0000 UTC m=+0.068643127 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 22 10:29:40 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6767 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:40.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:41.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:42.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:42.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:42 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:43.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6772 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:43 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:43.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:44.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:44.434+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:44 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:45.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:45 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:46.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:46.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:29:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:29:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:29:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:47.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:47 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:47.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:48.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:48.463+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:48 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:48 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6777 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:48 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:49.450+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:49.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:50.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:50 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:50.480+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:51.503+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:51.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:52.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:52.500+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:52 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:53.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:53 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6782 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:53.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:54.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:54 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:54.594+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:55.548+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:55 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:55.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:56 np0005592159 podman[282197]: 2026-01-22 15:29:56.102751675 +0000 UTC m=+0.159681006 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 22 10:29:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:29:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:56.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:29:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:56.525+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:56 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:56 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:57.478+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:29:57 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:57.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:29:58.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:29:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:58.449+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:58 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:29:58 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:29:59.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:29:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:29:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:29:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:29:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:29:59.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:00.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:00.462+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 172 slow ops, oldest one blocked for 6788 sec, osd.2 has slow ops
Jan 22 10:30:00 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:01.475+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:01.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:02 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:02.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:02.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:03 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:03.493+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:03.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:04 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6792 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:04 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:04.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:04.529+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:30:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:05 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:30:05 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:05.527+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:05.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:06 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:06.569+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:07.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:07 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:07.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:08.609+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:09 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:09 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6798 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:09.634+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:09.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:10 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:10.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:10 np0005592159 podman[282384]: 2026-01-22 15:30:10.661294897 +0000 UTC m=+0.078726693 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 22 10:30:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:10.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:11 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:30:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:11.640+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:11.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:12.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:12 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:12.608+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 172 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:13 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:13.649+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:13.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:14.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:14 np0005592159 ceph-mon[77081]: 172 slow requests (by type [ 'delayed' : 172 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:30:14 np0005592159 ceph-mon[77081]: Health check update: 172 slow ops, oldest one blocked for 6803 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:14.631+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:15 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:15.679+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:15.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:16.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:16 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:16.701+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:17 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:17.711+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:18.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:18.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:18 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:18 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6808 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:18.680+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:19.685+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:19 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:20.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:20.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:20.658+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:20 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:21.642+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:21 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:22.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:22.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:22.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:23 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:23.596+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:24.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:24 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:24 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6813 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:24.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:24.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:25 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:25.508+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:26.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:26.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:26 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:26.528+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592159 podman[282488]: 2026-01-22 15:30:27.019406027 +0000 UTC m=+0.082301079 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 10:30:27 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:27.510+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:28.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:28.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:28.498+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:28 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:28 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6818 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:29.483+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:29 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:30.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:30.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:30.435+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:30 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:31.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:31 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:32.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:32.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:32.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:32 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:33.422+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:34.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:34 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6822 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:34 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:34.411+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:35 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:35.391+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:36.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:36.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:36.342+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:37.376+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:37 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:37 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:38.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:38.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 65 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:30:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:38.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 65 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:38 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:38 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6827 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:39.350+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:39 np0005592159 ceph-mon[77081]: 65 slow requests (by type [ 'delayed' : 65 ] most affected pool [ 'vms' : 41 ])
Jan 22 10:30:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:40.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:40.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:40.317+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:40 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:40 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:41 np0005592159 podman[282572]: 2026-01-22 15:30:41.018375306 +0000 UTC m=+0.073550077 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:30:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:41.335+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:41 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:42.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:42.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:42.333+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 75 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:43 np0005592159 ceph-mon[77081]: 75 slow requests (by type [ 'delayed' : 75 ] most affected pool [ 'vms' : 48 ])
Jan 22 10:30:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:43.381+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:44.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:44 np0005592159 ceph-mon[77081]: Health check update: 75 slow ops, oldest one blocked for 6832 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:44 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:44.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:44.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:45 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:45.359+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:46.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:46.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:46.396+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:46 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.269 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:30:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:30:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:30:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:47.418+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:47 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:48.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:48.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:48.433+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:48 np0005592159 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6837 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:48 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:49.403+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:49 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:30:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:50.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:30:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:50.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:50.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:50 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:51.467+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:51 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:52.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:52.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:52.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:53 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:53.419+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:54.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:54 np0005592159 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6842 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:54 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:54.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:54.451+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:55 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:55.427+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:56.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:30:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:56.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:30:56 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:56.395+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:57.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:57 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:30:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 10:30:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:30:58.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 10:30:58 np0005592159 podman[282651]: 2026-01-22 15:30:58.050185779 +0000 UTC m=+0.103176131 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:30:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:30:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:30:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:30:58.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:30:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:58.395+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 173 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:58 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:30:58 np0005592159 ceph-mon[77081]: Health check update: 173 slow ops, oldest one blocked for 6847 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:30:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:30:59.409+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:30:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:00.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:00 np0005592159 ceph-mon[77081]: 173 slow requests (by type [ 'delayed' : 173 ] most affected pool [ 'vms' : 101 ])
Jan 22 10:31:00 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:00.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:00.385+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:01 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:01.365+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 140 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:02.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:02 np0005592159 ceph-mon[77081]: 140 slow requests (by type [ 'delayed' : 140 ] most affected pool [ 'vms' : 86 ])
Jan 22 10:31:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:02.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:02.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:03 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:03.428+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:04.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:04 np0005592159 ceph-mon[77081]: Health check update: 140 slow ops, oldest one blocked for 6852 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:04 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:04.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:04.444+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:05.436+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:05 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:06.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:06.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:06.440+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:06 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:07.398+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:07 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:08.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:08 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:08 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:08.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:08.445+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:09.477+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:09 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6857 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:09 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:10.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:10.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:10.466+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:11 np0005592159 podman[282798]: 2026-01-22 15:31:11.280685347 +0000 UTC m=+0.050527537 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:31:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:11.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:12.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:12 np0005592159 podman[283046]: 2026-01-22 15:31:12.259362158 +0000 UTC m=+0.078510668 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef)
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:12.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:12 np0005592159 podman[283046]: 2026-01-22 15:31:12.392717196 +0000 UTC m=+0.211865686 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 22 10:31:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:12.406+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:12 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:13 np0005592159 podman[283201]: 2026-01-22 15:31:13.201842882 +0000 UTC m=+0.071606366 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:31:13 np0005592159 podman[283201]: 2026-01-22 15:31:13.217017583 +0000 UTC m=+0.086781007 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:31:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:13.362+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:13 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:13 np0005592159 podman[283264]: 2026-01-22 15:31:13.46070026 +0000 UTC m=+0.057565794 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc.)
Jan 22 10:31:13 np0005592159 podman[283264]: 2026-01-22 15:31:13.471159737 +0000 UTC m=+0.068025281 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, release=1793, com.redhat.component=keepalived-container, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 22 10:31:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:14.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:14.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:14.379+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6862 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:14 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:31:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:15.407+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:15 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:16.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:16.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:16.364+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:16 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:17.410+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:17 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:17 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:18.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:18.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:18.368+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:18 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:18 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6867 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:19.339+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:19 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:19 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000052s ======
Jan 22 10:31:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:20.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 22 10:31:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:20.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:20.381+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:20 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:21.380+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:31:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:22.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:22.343+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:22.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:22 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:23 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:23.329+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:24.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:24 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6872 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:24.349+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:24.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:25 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:25.351+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:26.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:26.319+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:26 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:26.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:27 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:27.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:27 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:28.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:28 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:28.373+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:28.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:29 np0005592159 podman[283486]: 2026-01-22 15:31:29.117730441 +0000 UTC m=+0.157516638 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:31:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:29.358+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:29 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:29 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6877 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:30.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:30.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:30.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:30 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:31.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 5 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:31:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:31 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:32.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:32.369+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:32.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:32 np0005592159 ceph-mon[77081]: 5 slow requests (by type [ 'delayed' : 5 ] most affected pool [ 'vms' : 4 ])
Jan 22 10:31:32 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:33.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:33 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:33 np0005592159 ceph-mon[77081]: Health check update: 5 slow ops, oldest one blocked for 6882 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:34.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:34.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:34.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:34 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:35.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:35 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:36.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:36.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:36.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:36 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:37.377+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:38.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:38.331+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:38.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #232. Immutable memtables: 0.
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.418232) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 149] Flushing memtable with next log file: 232
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898418387, "job": 149, "event": "flush_started", "num_memtables": 1, "num_entries": 2759, "num_deletes": 543, "total_data_size": 5144947, "memory_usage": 5238032, "flush_reason": "Manual Compaction"}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 149] Level-0 flush table #233: started
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898437767, "cf_name": "default", "job": 149, "event": "table_file_creation", "file_number": 233, "file_size": 2094155, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 112981, "largest_seqno": 115734, "table_properties": {"data_size": 2085806, "index_size": 3950, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 31336, "raw_average_key_size": 23, "raw_value_size": 2063839, "raw_average_value_size": 1570, "num_data_blocks": 166, "num_entries": 1314, "num_filter_entries": 1314, "num_deletions": 543, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095727, "oldest_key_time": 1769095727, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 233, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 149] Flush lasted 19595 microseconds, and 10412 cpu microseconds.
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.437842) [db/flush_job.cc:967] [default] [JOB 149] Level-0 flush table #233: 2094155 bytes OK
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.437865) [db/memtable_list.cc:519] [default] Level-0 commit table #233 started
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440004) [db/memtable_list.cc:722] [default] Level-0 commit table #233: memtable #1 done
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440018) EVENT_LOG_v1 {"time_micros": 1769095898440014, "job": 149, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.440038) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 149] Try to delete WAL files size 5131403, prev total WAL file size 5139670, number of live WAL files 2.
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000229.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.441166) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323538' seq:72057594037927935, type:22 .. '6D6772737461740033353130' seq:0, type:0; will stop at (end)
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 150] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 149 Base level 0, inputs: [233(2045KB)], [231(11MB)]
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898441194, "job": 150, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [233], "files_L6": [231], "score": -1, "input_data_size": 14152863, "oldest_snapshot_seqno": -1}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 150] Generated table #234: 14483 keys, 11396376 bytes, temperature: kUnknown
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898538691, "cf_name": "default", "job": 150, "event": "table_file_creation", "file_number": 234, "file_size": 11396376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11318992, "index_size": 40087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36229, "raw_key_size": 396990, "raw_average_key_size": 27, "raw_value_size": 11074489, "raw_average_value_size": 764, "num_data_blocks": 1444, "num_entries": 14483, "num_filter_entries": 14483, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769095898, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 234, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.538985) [db/compaction/compaction_job.cc:1663] [default] [JOB 150] Compacted 1@0 + 1@6 files to L6 => 11396376 bytes
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.540879) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.0 rd, 116.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.5 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(12.2) write-amplify(5.4) OK, records in: 15506, records dropped: 1023 output_compression: NoCompression
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.540899) EVENT_LOG_v1 {"time_micros": 1769095898540890, "job": 150, "event": "compaction_finished", "compaction_time_micros": 97577, "compaction_time_cpu_micros": 29150, "output_level": 6, "num_output_files": 1, "total_output_size": 11396376, "num_input_records": 15506, "num_output_records": 14483, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000233.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898541468, "job": 150, "event": "table_file_deletion", "file_number": 233}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000231.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769095898543951, "job": 150, "event": "table_file_deletion", "file_number": 231}
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.441099) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543993) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:31:38.543995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:31:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:39.305+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:39 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:39 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6887 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:40.320+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:40.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:40.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:41.301+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:41 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:41 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:42 np0005592159 podman[283570]: 2026-01-22 15:31:42.007008905 +0000 UTC m=+0.064357193 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 22 10:31:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:42.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:42.339+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:42.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:43 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:43.322+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:43 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6892 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:44.293+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:44.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:44 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:44 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:45.270+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:46.289+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:46.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:47.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.270 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:31:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:31:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:31:47 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:48.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:48 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:48.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:49.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:49 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6897 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:49 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:50.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:50.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:50.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:51.282+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:51 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:52.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:52.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:52.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:52 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:53.348+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6903 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:54.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:54.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:54.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:55 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:55 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:55.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:31:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:56.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:31:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:56.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:56 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:57.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:57 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:31:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:31:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:31:58.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:31:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:31:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:31:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:31:58.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:31:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:58.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:58 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6907 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:31:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:31:59.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:31:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:31:59 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:00 np0005592159 podman[283650]: 2026-01-22 15:32:00.043129356 +0000 UTC m=+0.093573506 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:32:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:00.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:00.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:00.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:00 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:01.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:02 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:02.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:02.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:03.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:03 np0005592159 ceph-mon[77081]: 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:32:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:04.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:04.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:04 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:04 np0005592159 ceph-mon[77081]: Health check update: 59 slow ops, oldest one blocked for 6912 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:04 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:05.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:05 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:06.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:06.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:07 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:07.437+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:08.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:08.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:08.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:09 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:09 np0005592159 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6917 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:09.344+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:10.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:10.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:10.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:10 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:11.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:11 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:12.334+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:12.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:12.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:12 np0005592159 podman[283708]: 2026-01-22 15:32:12.509744466 +0000 UTC m=+0.094651585 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 22 10:32:12 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:13.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:14.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:14.390+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:14.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:14 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:14 np0005592159 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6922 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:15.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:16 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:16.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:16.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:16.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:17 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:17.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:18.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:18.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:18.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1535123889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:32:18 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:19.489+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:20.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:20.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:20 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:20.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:20 np0005592159 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6927 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:21.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:22.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:22.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:22 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:22.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:22 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:23.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:23.895 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:32:23 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:23.897 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:32:23 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592159 ceph-mon[77081]: 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:32:23 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:32:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:24.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:24.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:24.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:25 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:25 np0005592159 ceph-mon[77081]: Health check update: 159 slow ops, oldest one blocked for 6933 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:25 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:32:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:25.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:26.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:26.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:27.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:27 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:28.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:28.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:28.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:29.540+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:30 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:30 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:30 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:30 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6938 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:30.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:30.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:30.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592159 podman[283893]: 2026-01-22 15:32:31.060208764 +0000 UTC m=+0.112336837 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:32:31 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:31.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:32 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:32.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:32.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:32.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:32 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:32.898 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:32:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:33 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:33 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:33.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6943 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:34.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:34.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:34.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:35 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:35 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:32:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:35.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:36.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:36 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:36.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:36.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:37 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:37.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:38.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:38.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:38.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:38 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:39.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:39 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:39 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6948 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:32:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:40.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:32:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:40.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:40.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:41 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:41.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:42.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:42.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:42 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:42.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:43 np0005592159 podman[284026]: 2026-01-22 15:32:43.027222059 +0000 UTC m=+0.087119217 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 22 10:32:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:43.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:43 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:44.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:44.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:44.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:44 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6953 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:45.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 10:32:45 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:46.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:46.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.271 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.272 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:32:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:32:47.272 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:32:47 np0005592159 ceph-mon[77081]: 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 10:32:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:47.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:48.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:48.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:48.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:49 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:49 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:49.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:50.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:50.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:50 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6958 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:50 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:50.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:51.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:52 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:52.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:52.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:52.637+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:53.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:53 np0005592159 ceph-mon[77081]: 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:32:53 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:54.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:54.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:54.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:54 np0005592159 ceph-mon[77081]: Health check update: 183 slow ops, oldest one blocked for 6963 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:56 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:56.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:56.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:56.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:57 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:57.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:58 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:32:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:32:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:32:58.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:32:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:32:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:32:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:32:58.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:32:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:58.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:59 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:32:59 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6968 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:32:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:32:59.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:32:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:00.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:00 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:00.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:00.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:01.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:02 np0005592159 podman[284105]: 2026-01-22 15:33:02.025345948 +0000 UTC m=+0.087230159 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 22 10:33:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:02.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:02.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:02.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:03 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:03.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:04.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:04.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:04 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6973 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:04 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:04.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:05.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:05 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:06.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:06.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:06.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:07.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:08 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:08 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:08.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:09.827+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:10 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:10 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:10 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6978 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:10.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:10.852+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:11 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:11.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:12 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:12.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:12.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:12.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:13 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:13.906+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:13 np0005592159 podman[284195]: 2026-01-22 15:33:13.992463745 +0000 UTC m=+0.051164271 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 22 10:33:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:14.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:14.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:14 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6983 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:14 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:14.918+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:15 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:15.963+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:16.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:16.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:16 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:16.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:17 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:17.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:18.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:18.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/129700801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:18 np0005592159 ceph-mon[77081]: Health check update: 41 slow ops, oldest one blocked for 6988 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:18 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:18.949+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:19 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:19.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:20 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:20.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:20.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:21.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:21 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:21 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:21.988+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:22.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:22.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:23 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:23.002+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:23 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:24 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:24.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:24.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:24.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:25 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:25 np0005592159 ceph-mon[77081]: 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:33:25 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:26.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:26.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:26.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:26 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:26 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:27 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:27.028+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:28 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:28.068+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:28 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 6998 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:28 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:28.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:28 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:28.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:29 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:29.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:29 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:30.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:30.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:30.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:30 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:30 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:31 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:31.092+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:31 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:32 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:32.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:32.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:32.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:32 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:33 np0005592159 podman[284223]: 2026-01-22 15:33:33.017642965 +0000 UTC m=+0.080342087 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 22 10:33:33 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:33.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:34 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7003 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:34 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:34 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:34.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:34.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:34.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:35 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:35.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:35 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:36.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:36 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:36 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:33:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:36.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:36.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:37 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:37.048+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:37 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:37 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:33:38 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:38.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:38.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:38 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:38 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:38.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:39.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:39 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:39 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7008 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:39 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:40 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:40.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:40.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:40.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:40 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:41 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:41.134+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:41 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:42 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:42.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:42.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:42.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:43 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:43.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:43 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:33:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:44.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:44.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:44.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:44 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:44 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7013 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:44 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:45 np0005592159 podman[284486]: 2026-01-22 15:33:45.019751871 +0000 UTC m=+0.073915165 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 22 10:33:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:45.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:45 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:45 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:46.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:46 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:46.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:46.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:46 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:47.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:47 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:33:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:33:47.273 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:33:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:48.127+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:48 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:48 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:48.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:48.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:49.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:49 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:49 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:49 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7018 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:50.126+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:50 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:50.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:50 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:50.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:51.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:51 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:51 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:52.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:52 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:52.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:52.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:52 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:53.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:53 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #235. Immutable memtables: 0.
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.067622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 151] Flushing memtable with next log file: 235
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034067688, "job": 151, "event": "flush_started", "num_memtables": 1, "num_entries": 2321, "num_deletes": 736, "total_data_size": 3742318, "memory_usage": 3824416, "flush_reason": "Manual Compaction"}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 151] Level-0 flush table #236: started
Jan 22 10:33:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:54.080+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:54 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034121579, "cf_name": "default", "job": 151, "event": "table_file_creation", "file_number": 236, "file_size": 2442903, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 115739, "largest_seqno": 118055, "table_properties": {"data_size": 2434191, "index_size": 4181, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3461, "raw_key_size": 29850, "raw_average_key_size": 21, "raw_value_size": 2411596, "raw_average_value_size": 1777, "num_data_blocks": 178, "num_entries": 1357, "num_filter_entries": 1357, "num_deletions": 736, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769095898, "oldest_key_time": 1769095898, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 236, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 151] Flush lasted 54015 microseconds, and 6049 cpu microseconds.
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.121643) [db/flush_job.cc:967] [default] [JOB 151] Level-0 flush table #236: 2442903 bytes OK
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.121667) [db/memtable_list.cc:519] [default] Level-0 commit table #236 started
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138776) [db/memtable_list.cc:722] [default] Level-0 commit table #236: memtable #1 done
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138815) EVENT_LOG_v1 {"time_micros": 1769096034138806, "job": 151, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.138840) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 151] Try to delete WAL files size 3729811, prev total WAL file size 3729811, number of live WAL files 2.
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000232.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.139940) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035353338' seq:72057594037927935, type:22 .. '6C6F676D0035373931' seq:0, type:0; will stop at (end)
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 152] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 151 Base level 0, inputs: [236(2385KB)], [234(10MB)]
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034140012, "job": 152, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [236], "files_L6": [234], "score": -1, "input_data_size": 13839279, "oldest_snapshot_seqno": -1}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 152] Generated table #237: 14351 keys, 11969715 bytes, temperature: kUnknown
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034362729, "cf_name": "default", "job": 152, "event": "table_file_creation", "file_number": 237, "file_size": 11969715, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11891833, "index_size": 40905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35909, "raw_key_size": 395183, "raw_average_key_size": 27, "raw_value_size": 11648381, "raw_average_value_size": 811, "num_data_blocks": 1472, "num_entries": 14351, "num_filter_entries": 14351, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096034, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 237, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.363268) [db/compaction/compaction_job.cc:1663] [default] [JOB 152] Compacted 1@0 + 1@6 files to L6 => 11969715 bytes
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.368404) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 62.1 rd, 53.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 10.9 +0.0 blob) out(11.4 +0.0 blob), read-write-amplify(10.6) write-amplify(4.9) OK, records in: 15840, records dropped: 1489 output_compression: NoCompression
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.368461) EVENT_LOG_v1 {"time_micros": 1769096034368440, "job": 152, "event": "compaction_finished", "compaction_time_micros": 222999, "compaction_time_cpu_micros": 35903, "output_level": 6, "num_output_files": 1, "total_output_size": 11969715, "num_input_records": 15840, "num_output_records": 14351, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000236.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034370098, "job": 152, "event": "table_file_deletion", "file_number": 236}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000234.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096034372674, "job": 152, "event": "table_file_deletion", "file_number": 234}
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.139782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372812) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372821) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:33:54.372827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:33:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:54.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:54.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:54 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7023 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:55.091+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:55 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:55 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:55 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:56.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:56 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:56.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:33:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:56.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:33:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:57.101+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:57 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:57 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:58.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:58 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:33:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:33:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:33:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:33:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:33:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:33:58.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:33:58 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:33:58 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:33:59.083+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:59 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:33:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:33:59 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7028 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:33:59 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:00.047+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:00 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:00.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:01.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:01 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:01 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:02.060+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:02 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:02.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:02 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:02 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:03.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:03 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:03 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:03 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:04 np0005592159 podman[284567]: 2026-01-22 15:34:04.021484176 +0000 UTC m=+0.077767748 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 22 10:34:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:04.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:04 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:04.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:04.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:05.084+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:05 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:05 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7033 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:05 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:06.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:06 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:06.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:06.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:07.027+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:07 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:07 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:07 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:08 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:08.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:08.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:08 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:08.838 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:34:08 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:08.840 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:34:08 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:08 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:09.050+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:09 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:10.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:10 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:10 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7038 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:10 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:10.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:10.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:11.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:11 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:11.842 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:34:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:11.958+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:11 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:12 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:12.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:12.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:12.913+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:12 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:13.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:13 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:14 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:14 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7043 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:14.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:14.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:14.863+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:14 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:15.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:15 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:15 np0005592159 ceph-mon[77081]: 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:15 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:15 np0005592159 podman[284649]: 2026-01-22 15:34:15.990556446 +0000 UTC m=+0.054769926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:34:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:16.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:16.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:16.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:16 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:17 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:17.969+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:17 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:18.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 e180: 3 total, 3 up, 3 in
Jan 22 10:34:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:18.986+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:18 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 10:34:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:19 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:19 np0005592159 ceph-mon[77081]: Health check update: 184 slow ops, oldest one blocked for 7048 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:19.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:20 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 10:34:20 np0005592159 ceph-mon[77081]: 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 10:34:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:20.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:20.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:20.958+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:20 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:21 np0005592159 ceph-mon[77081]: 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 10:34:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:21.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:21 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:22 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:22.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:22.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:23.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:23 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:23 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:23 np0005592159 ceph-mon[77081]: 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:34:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:24.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:24 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:24 np0005592159 ceph-mon[77081]: Health check update: 55 slow ops, oldest one blocked for 7053 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:24 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:24.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:25.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:25 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:25 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:26.050+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:26 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:26.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:26.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:27.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:27 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:27 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:28.008+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:28 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:28.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:28.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:29.056+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:29 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:29 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:29 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7058 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:29 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:30.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:30 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 10:34:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:30.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:30.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:31.012+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:31 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:32.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:32 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:32.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:32.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:32 np0005592159 ceph-mon[77081]: 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 10:34:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:33.046+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:33 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:33 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:34.081+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:34 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:34.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:34.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:34 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:34 np0005592159 ceph-mon[77081]: Health check update: 21 slow ops, oldest one blocked for 7063 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:35 np0005592159 podman[284728]: 2026-01-22 15:34:35.078240817 +0000 UTC m=+0.130461648 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:34:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:35.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:35 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:35 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:35 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:36.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:36 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:36.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:36.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:37.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:37 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:37 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:38.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:38 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:38.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:38.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:38 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:38 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:39.127+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:39 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:40 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7068 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:40 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:40.152+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:40 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:34:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:40.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:34:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:41.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:41 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:41 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:42.112+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:42 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:42.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:42 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:43 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:43.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:43 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:44.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:44.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7073 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:45.138+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:34:45 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:46 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:46.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:46.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:46.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:46 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:46 np0005592159 podman[284892]: 2026-01-22 15:34:46.99704377 +0000 UTC m=+0.052875056 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 22 10:34:47 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:47.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.274 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:34:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:34:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #238. Immutable memtables: 0.
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.813861) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 153] Flushing memtable with next log file: 238
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087813940, "job": 153, "event": "flush_started", "num_memtables": 1, "num_entries": 1029, "num_deletes": 346, "total_data_size": 1639716, "memory_usage": 1658312, "flush_reason": "Manual Compaction"}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 153] Level-0 flush table #239: started
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087825475, "cf_name": "default", "job": 153, "event": "table_file_creation", "file_number": 239, "file_size": 1076604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 118060, "largest_seqno": 119084, "table_properties": {"data_size": 1071940, "index_size": 1995, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 14126, "raw_average_key_size": 22, "raw_value_size": 1061306, "raw_average_value_size": 1692, "num_data_blocks": 84, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 346, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096034, "oldest_key_time": 1769096034, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 239, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 153] Flush lasted 11669 microseconds, and 6739 cpu microseconds.
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825541) [db/flush_job.cc:967] [default] [JOB 153] Level-0 flush table #239: 1076604 bytes OK
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.825569) [db/memtable_list.cc:519] [default] Level-0 commit table #239 started
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827496) [db/memtable_list.cc:722] [default] Level-0 commit table #239: memtable #1 done
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827518) EVENT_LOG_v1 {"time_micros": 1769096087827510, "job": 153, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.827544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 153] Try to delete WAL files size 1634075, prev total WAL file size 1634075, number of live WAL files 2.
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000235.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828541) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130303430' seq:72057594037927935, type:22 .. '7061786F73003130323932' seq:0, type:0; will stop at (end)
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 154] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 153 Base level 0, inputs: [239(1051KB)], [237(11MB)]
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087828588, "job": 154, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [239], "files_L6": [237], "score": -1, "input_data_size": 13046319, "oldest_snapshot_seqno": -1}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 154] Generated table #240: 14267 keys, 11330775 bytes, temperature: kUnknown
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087914190, "cf_name": "default", "job": 154, "event": "table_file_creation", "file_number": 240, "file_size": 11330775, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11253743, "index_size": 40247, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35717, "raw_key_size": 393619, "raw_average_key_size": 27, "raw_value_size": 11011917, "raw_average_value_size": 771, "num_data_blocks": 1445, "num_entries": 14267, "num_filter_entries": 14267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 240, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.914572) [db/compaction/compaction_job.cc:1663] [default] [JOB 154] Compacted 1@0 + 1@6 files to L6 => 11330775 bytes
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.916349) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 132.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(22.6) write-amplify(10.5) OK, records in: 14978, records dropped: 711 output_compression: NoCompression
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.916382) EVENT_LOG_v1 {"time_micros": 1769096087916367, "job": 154, "event": "compaction_finished", "compaction_time_micros": 85750, "compaction_time_cpu_micros": 40289, "output_level": 6, "num_output_files": 1, "total_output_size": 11330775, "num_input_records": 14978, "num_output_records": 14267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000239.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087916915, "job": 154, "event": "table_file_deletion", "file_number": 239}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000237.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096087921304, "job": 154, "event": "table_file_deletion", "file_number": 237}
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.828482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:47 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:34:47.921441) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:34:48 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:48.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:48.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:48.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:48 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:49 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:49.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:49 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7078 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:49 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:50 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:50.146+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:50.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:50 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:51 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:51.123+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:34:51 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:52 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:52.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:52.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:52.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:53 np0005592159 ceph-mon[77081]: 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:34:53 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:53.078+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:54 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:54.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:54 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:54 np0005592159 ceph-mon[77081]: Health check update: 42 slow ops, oldest one blocked for 7083 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:54.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:55 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:55.054+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:55 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:55 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:56.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:56 np0005592159 ceph-mon[77081]: 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:34:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:34:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:34:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:56.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:57 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:57.052+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:57 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:58.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:34:58.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:58 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:34:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:34:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:34:58.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:34:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:59.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:59 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:34:59 np0005592159 ceph-mon[77081]: Health check update: 91 slow ops, oldest one blocked for 7088 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:34:59 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:34:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:34:59.979+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:59 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:34:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:00.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:00.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:00 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:00.974+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:00 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:01 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:01.987+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:01 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:02.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:02.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:02 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:03.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:03 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:04.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:04 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:04.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:05.057+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:05 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:05 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:05 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7093 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:06 np0005592159 podman[285021]: 2026-01-22 15:35:06.084203778 +0000 UTC m=+0.133809638 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 22 10:35:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:06.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:06 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:06 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:06.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:06.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:07.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:07 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:07 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:08.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:08 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:08.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:09.084+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:09 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:09 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:09 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7098 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:10.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:10 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:10 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:10.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:10.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:11.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:11 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:11 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:12.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:12 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:12.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:13.095+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:13 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:13 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:14.107+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:14 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:14 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7103 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:14.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:15.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:15 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:15 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:16.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:16 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:16.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:16.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:17.133+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:17 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:17 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:17 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592159 podman[285104]: 2026-01-22 15:35:18.037349853 +0000 UTC m=+0.094450441 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:35:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:18.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:18 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:35:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:35:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:35:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/41108104' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:35:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:18.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:18.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:19.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:19 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:19 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:19 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7108 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:20.160+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:20 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:20.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:21.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:21 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:21 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:22.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:22 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:22.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:22.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:23.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:23 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:23 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:24.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:24 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:24 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7113 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:24.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:25.201+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:25 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:25 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:25 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:26.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:26 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592159 ceph-mon[77081]: 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:35:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:26.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:26.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:27.259+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:27 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:27 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:28.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:28 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:28.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:28 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:28.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:35:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.0 total, 600.0 interval#012Cumulative writes: 22K writes, 119K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.20 GB, 0.03 MB/s#012Cumulative WAL: 22K writes, 22K syncs, 1.00 writes per sync, written: 0.20 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1787 writes, 10K keys, 1787 commit groups, 1.0 writes per commit group, ingest: 16.58 MB, 0.03 MB/s#012Interval WAL: 1787 writes, 1787 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     54.4      2.33              0.47        77    0.030       0      0       0.0       0.0#012  L6      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   5.9    110.8     96.0      7.82              2.53        76    0.103    834K    46K       0.0       0.0#012 Sum      1/0   10.81 MB   0.0      0.8     0.1      0.7       0.9      0.1       0.0   6.9     85.4     86.5     10.15              3.00       153    0.066    834K    46K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.5     62.0     62.6      1.26              0.26        12    0.105     91K   5756       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.8     0.1      0.7       0.7      0.0       0.0   0.0    110.8     96.0      7.82              2.53        76    0.103    834K    46K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     54.5      2.33              0.47        76    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7200.0 total, 600.0 interval#012Flush(GB): cumulative 0.124, interval 0.010#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.86 GB write, 0.12 MB/s write, 0.85 GB read, 0.12 MB/s read, 10.2 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 88.10 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000746 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(4586,83.06 MB,27.3233%) FilterBlock(153,2.27 MB,0.74629%) IndexBlock(153,2.77 MB,0.909996%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:35:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:29.317+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:29 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:29 np0005592159 ceph-mon[77081]: Health check update: 37 slow ops, oldest one blocked for 7118 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:29 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:30.322+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:30 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:30.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:30.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:30 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:31.281+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:31 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:32.246+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:32 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:32.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:32.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:33.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:33 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:34.255+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:34 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:34 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:34.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:34.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:35.283+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:35 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7123 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:35 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:35 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:36.278+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:36 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:36.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:36.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:36 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:37 np0005592159 podman[285182]: 2026-01-22 15:35:37.05040426 +0000 UTC m=+0.103292717 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 22 10:35:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:37.298+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:37 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:37 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:38.265+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:38 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:38.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:38 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:39.273+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:39 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:39 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7128 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:39 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:40.231+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:40 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:40.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:40.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:40 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:41.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:41 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:42.280+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:42 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:42.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:42.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:43.289+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:43 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:43 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:43 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:44.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:44.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:44 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7133 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:44 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:44.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:45.256+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:45 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:45 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:46.268+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:46 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:46.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:46.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:46 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:47.247+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:47 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:35:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:35:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:35:47 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:48.239+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:48 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:35:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:48.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:35:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:48.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:48 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:49 np0005592159 podman[285214]: 2026-01-22 15:35:49.000295 +0000 UTC m=+0.063595252 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:35:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:49 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:50 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7138 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:50 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:50.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:50 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:50.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:35:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:50.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:35:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:51.179+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:51 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:51 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:52.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:52 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:52.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:52.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:53 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:53 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:53.235+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:53 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:54 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:54 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7143 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:35:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:54.240+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:54 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:54.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:35:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:54.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:55.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:55 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:35:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:35:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:56.253+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:56 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:56.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:57 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:57 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:57.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:58 np0005592159 ceph-mon[77081]: 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:35:58 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:58 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:58.197+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:35:58.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:35:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:35:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:35:58.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:35:59 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:59 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:35:59.219+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:35:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:35:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:00 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:00.177+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:00 np0005592159 ceph-mon[77081]: Health check update: 187 slow ops, oldest one blocked for 7148 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:00 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:00.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:00.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:01 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:01.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:01 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:01 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:36:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:02.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:02 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:02.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:02.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:03.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:03 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:03 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:04.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:04 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:04.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:04 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:04 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7153 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:05.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:05 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:06.058+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:06 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:06.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:07.105+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:07 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:07 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:07 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:07 np0005592159 ceph-mgr[77438]: client.0 ms_handle_reset on v2:192.168.122.100:6800/1334415348
Jan 22 10:36:08 np0005592159 podman[285593]: 2026-01-22 15:36:08.016753227 +0000 UTC m=+0.076477684 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 22 10:36:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:08.065+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:08 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:08.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:08.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:09.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:09 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:09 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:09 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7158 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:10.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:10 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:10 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:10.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:11.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:11 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:11 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:12.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:12 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:12.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:13.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:13 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:13 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:14.064+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:14 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:14.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:14.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:14 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:14 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7163 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:15.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:15 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:15 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:16.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:16 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:16.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:16.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:16 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:17.092+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:17 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:17 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:18.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:18 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:18.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:18.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:18 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:19.153+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:19 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:19 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7168 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:19 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:20 np0005592159 podman[285677]: 2026-01-22 15:36:20.01457229 +0000 UTC m=+0.068928503 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:36:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:20.111+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:20 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:20.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:20.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:20 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:21.117+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:21 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:22.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:22 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:22.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:22.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:23.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:23 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:23 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:24.161+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:24 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:24.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:24.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:25.121+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:25 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:25 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:25 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7173 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:25 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:26.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:26 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:26.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:26.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:26 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:27.116+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:27 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:28.150+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:28 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:28.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:28.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:29.200+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:29 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592159 ceph-mon[77081]: 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:36:29 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:30.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:30 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:30 np0005592159 ceph-mon[77081]: Health check update: 101 slow ops, oldest one blocked for 7178 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:30 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:30.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:31.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:31.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:31 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:36:31 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7200.5 total, 600.0 interval#012Cumulative writes: 14K writes, 44K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s#012Cumulative WAL: 14K writes, 4846 syncs, 2.94 writes per sync, written: 0.03 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 636 writes, 1117 keys, 636 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s#012Interval WAL: 636 writes, 315 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:36:31 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:32.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:32 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:33.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:33.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:33 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:33 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:34.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:34 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:34 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:34 np0005592159 ceph-mon[77081]: Health check update: 188 slow ops, oldest one blocked for 7183 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:35.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:35.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:35 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:36.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:36 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:36 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:37.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:37.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:37 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:37 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:37 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:38.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:38 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:39 np0005592159 podman[285755]: 2026-01-22 15:36:39.064052145 +0000 UTC m=+0.118546532 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 22 10:36:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:39.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:39.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:39 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:39 np0005592159 ceph-mon[77081]: Health check update: 188 slow ops, oldest one blocked for 7188 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:40.167+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:40 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:40.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:40 np0005592159 ceph-mon[77081]: 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:36:40 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:41.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:41.181+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:41 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:42.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:42 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:42.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:43.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:43.199+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:43 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:43 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:44.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:36:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:36:44 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:44 np0005592159 ceph-mon[77081]: Health check update: 120 slow ops, oldest one blocked for 7193 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:45.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:45.251+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:45 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:45 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:45 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:46.232+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:46 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:47 np0005592159 ceph-mon[77081]: 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:36:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:47.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:47.213+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:47 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.275 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:36:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:36:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:36:48 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:48.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:48 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:36:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:36:49 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:49.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:49 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:50 np0005592159 ceph-mon[77081]: Health check update: 120 slow ops, oldest one blocked for 7197 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:50 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:50.208+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:50 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592159 podman[285788]: 2026-01-22 15:36:51.035400781 +0000 UTC m=+0.087998045 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:36:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:51.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:51.180+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:51 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:51 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:52.193+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:52 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:53.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:53.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:53 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:53 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:54.142+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:54 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:54 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7202 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:36:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:55.169+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:55 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:55 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:56.151+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:56 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:36:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:36:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:57.163+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:57 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:57 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:58.147+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:58 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:36:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:36:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:36:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:36:59.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:36:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:36:59.137+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:59 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:36:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:59 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:36:59 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7207 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:36:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:00.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:00 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:00 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:01.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:01.134+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:01 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:01 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:02.178+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:02 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:02.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:02 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:03.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:03.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:03 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:04.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:04 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:04.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7212 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:04 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:37:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:05.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:05.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:05 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:06 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:06.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:06 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:06.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:07.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:07 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:07 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:07.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:08.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:08 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:08 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:08.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:09.036+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:09 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:09 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:09.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:10 np0005592159 podman[285999]: 2026-01-22 15:37:10.029191595 +0000 UTC m=+0.081977500 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 22 10:37:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:10.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:10 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:10 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:10 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7217 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:10.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:11.074+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:11 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:11.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:11 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:11 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:37:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:12.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:12 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:12 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:12.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:13.062+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:13 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:13.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:13 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:14.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:14 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:14 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7222 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:14.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:15.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:15.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:15 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:15 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:16.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:16 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:16.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:17.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:17.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:17 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:17 np0005592159 ceph-mon[77081]: 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:37:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:18.174+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:18 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:18 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 10:37:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:18.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 10:37:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:19.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:19.125+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:19 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:19 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:19 np0005592159 ceph-mon[77081]: Health check update: 49 slow ops, oldest one blocked for 7227 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:20.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:20 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:20.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:20 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:21 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:21.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:21.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:21 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:22 np0005592159 podman[286135]: 2026-01-22 15:37:22.016148851 +0000 UTC m=+0.079163593 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 22 10:37:22 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:22.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:22.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:22 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:23.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:23.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:23 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:23 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:24.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:24 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:24.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:24 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:24 np0005592159 ceph-mon[77081]: Health check update: 132 slow ops, oldest one blocked for 7232 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:25.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:25 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:25.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:25 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:26.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:26 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:26.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:27.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:27 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:27.049 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:37:27 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:27.051 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:37:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:27.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:28.002+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:28 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:28 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:28.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:29.043+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:29 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:29.053 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:37:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:29.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:29 np0005592159 ceph-mon[77081]: 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:37:29 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:30.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:30 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:30 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:30 np0005592159 ceph-mon[77081]: Health check update: 132 slow ops, oldest one blocked for 7237 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:30.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:31.069+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:31 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:31.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:31 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:32.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:32 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:32.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:33.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:33 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:33.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:33 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:34.044+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:34 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:34 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7242 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:34.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:35.011+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:35 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:35.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:35 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:35.989+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:35 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:36.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:36 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:36.960+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:37.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:37 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:38.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #241. Immutable memtables: 0.
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.471479) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 155] Flushing memtable with next log file: 241
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258471572, "job": 155, "event": "flush_started", "num_memtables": 1, "num_entries": 2752, "num_deletes": 540, "total_data_size": 5172675, "memory_usage": 5270320, "flush_reason": "Manual Compaction"}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 155] Level-0 flush table #242: started
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258494727, "cf_name": "default", "job": 155, "event": "table_file_creation", "file_number": 242, "file_size": 3350889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 119089, "largest_seqno": 121836, "table_properties": {"data_size": 3340601, "index_size": 5693, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32349, "raw_average_key_size": 23, "raw_value_size": 3316033, "raw_average_value_size": 2397, "num_data_blocks": 239, "num_entries": 1383, "num_filter_entries": 1383, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096088, "oldest_key_time": 1769096088, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 242, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 155] Flush lasted 23273 microseconds, and 7905 cpu microseconds.
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494775) [db/flush_job.cc:967] [default] [JOB 155] Level-0 flush table #242: 3350889 bytes OK
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.494794) [db/memtable_list.cc:519] [default] Level-0 commit table #242 started
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496819) [db/memtable_list.cc:722] [default] Level-0 commit table #242: memtable #1 done
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496834) EVENT_LOG_v1 {"time_micros": 1769096258496828, "job": 155, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.496855) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 155] Try to delete WAL files size 5159195, prev total WAL file size 5159195, number of live WAL files 2.
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000238.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.498233) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130323931' seq:72057594037927935, type:22 .. '7061786F73003130353433' seq:0, type:0; will stop at (end)
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 156] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 155 Base level 0, inputs: [242(3272KB)], [240(10MB)]
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258498280, "job": 156, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [242], "files_L6": [240], "score": -1, "input_data_size": 14681664, "oldest_snapshot_seqno": -1}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 156] Generated table #243: 14553 keys, 12845888 bytes, temperature: kUnknown
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258596384, "cf_name": "default", "job": 156, "event": "table_file_creation", "file_number": 243, "file_size": 12845888, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12765564, "index_size": 42835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36421, "raw_key_size": 399026, "raw_average_key_size": 27, "raw_value_size": 12517622, "raw_average_value_size": 860, "num_data_blocks": 1556, "num_entries": 14553, "num_filter_entries": 14553, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096258, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 243, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.596650) [db/compaction/compaction_job.cc:1663] [default] [JOB 156] Compacted 1@0 + 1@6 files to L6 => 12845888 bytes
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598519) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.5 rd, 130.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 15650, records dropped: 1097 output_compression: NoCompression
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.598533) EVENT_LOG_v1 {"time_micros": 1769096258598526, "job": 156, "event": "compaction_finished", "compaction_time_micros": 98201, "compaction_time_cpu_micros": 44122, "output_level": 6, "num_output_files": 1, "total_output_size": 12845888, "num_input_records": 15650, "num_output_records": 14553, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000242.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258599137, "job": 156, "event": "table_file_deletion", "file_number": 242}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000240.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096258600913, "job": 156, "event": "table_file_deletion", "file_number": 240}
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.498113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600951) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:37:38.600964) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:37:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:38.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:38 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:38.994+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:39 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:39 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7247 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:39 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:39.981+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:40 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:40.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:40 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:40.978+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:41 np0005592159 podman[286213]: 2026-01-22 15:37:41.087294979 +0000 UTC m=+0.149796012 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 22 10:37:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:41 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:41 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:41.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:42 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:42.952+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:43 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:43.915+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:44.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:44.943+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:45 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7253 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:45 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:45.903+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:46 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:46.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:46 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:46.885+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:47.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.276 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.277 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:37:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:37:47.277 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:37:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:47.935+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:47 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:48.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:48 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:48.944+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:49.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:49 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:49 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7258 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:49 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:49.953+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:50 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:50.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:50 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:50.996+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:51.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:51 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:51 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:51.980+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:52 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:52.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:52 np0005592159 podman[286245]: 2026-01-22 15:37:52.992255956 +0000 UTC m=+0.051525158 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:37:53 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:53.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:53.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:53 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:54.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7263 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:37:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:54.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:37:54 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:54.998+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:37:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:55.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:37:56 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:56.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:56 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:37:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:56.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:37:57 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:56.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:57.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:57 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:57 np0005592159 ceph-mon[77081]: 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:37:58 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:58.023+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:37:58.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:59 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:37:59.014+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:37:59 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:37:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:37:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:37:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:37:59.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:37:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:00 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:00.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:00.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:00 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:00 np0005592159 ceph-mon[77081]: Health check update: 96 slow ops, oldest one blocked for 7267 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:01 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:01.024+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:01 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e181 e181: 3 total, 3 up, 3 in
Jan 22 10:38:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:01.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:01 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:01 np0005592159 ceph-osd[79779]: osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:01.992+0000 7f47f8ed4640 -1 osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:38:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:02.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:38:02 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e182 e182: 3 total, 3 up, 3 in
Jan 22 10:38:02 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:02 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 10:38:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:02.995+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:03.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:03 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:03.988+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:04 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:04 np0005592159 ceph-mon[77081]: 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 10:38:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:04.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e182 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:04 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:04.958+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:05.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:05 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:05 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7272 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:06 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:06.002+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:06 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:06.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:06.963+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:06 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:07.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:07 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:07.955+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:07 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:08.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:08.986+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:08 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:09 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 e183: 3 total, 3 up, 3 in
Jan 22 10:38:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:09.988+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:10 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7278 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:10.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:11.019+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:11.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:11 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:12.006+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592159 podman[286457]: 2026-01-22 15:38:12.042589586 +0000 UTC m=+0.108916176 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 22 10:38:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:12.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:12 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:12.989+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:13.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:13 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:13 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:38:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:13.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:14.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:14.992+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:15 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7283 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:15.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:16.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:16 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:16.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:17.020+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:38:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:17.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:38:17 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:17.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:38:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:38:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:38:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3149379598' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:38:18 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:18.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:19.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:19.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:19 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:19 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7288 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:19.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:20 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:38:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:20.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:38:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:20.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:21.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:21 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:21 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:38:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:21.968+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:22.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:22 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:23.012+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:23 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:23.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:23 np0005592159 podman[286590]: 2026-01-22 15:38:23.99862334 +0000 UTC m=+0.054120309 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 22 10:38:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:24.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:24 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:24 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7293 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:24.938+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:25 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:25.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:26.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:26 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:27.005+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:27 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:28.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:28.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:29.022+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:29 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:30.051+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:30 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:30 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7298 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:30.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:31.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:31 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:31.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:31 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:31.236 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 22 10:38:31 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:31.237 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 22 10:38:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:32.028+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:32 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:32.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:33.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:33.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:33 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:34.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:34 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7302 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:34.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:35.053+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:35.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:35 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:36.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:36 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:36.239 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 22 10:38:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:36.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:37.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:37 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:37.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:38.009+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:38.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:39.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:39 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:40.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:40 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:40 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7307 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:40.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:41 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:41.114+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:41.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:42.158+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:42 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:42.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:43 np0005592159 podman[286670]: 2026-01-22 15:38:43.028093611 +0000 UTC m=+0.087231704 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 22 10:38:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:43.159+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:43.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:43 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:44.124+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:44 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7312 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.002000054s ======
Jan 22 10:38:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:44.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000054s
Jan 22 10:38:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:45.092+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:45.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:45 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:46.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:46.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:47.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:47.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:38:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:38:47.278 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:38:47 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:48.058+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:48.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:49.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:38:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:49.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #244. Immutable memtables: 0.
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.718267) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 157] Flushing memtable with next log file: 244
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329718375, "job": 157, "event": "flush_started", "num_memtables": 1, "num_entries": 1305, "num_deletes": 380, "total_data_size": 2117760, "memory_usage": 2160496, "flush_reason": "Manual Compaction"}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 157] Level-0 flush table #245: started
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329734125, "cf_name": "default", "job": 157, "event": "table_file_creation", "file_number": 245, "file_size": 1389488, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 121841, "largest_seqno": 123141, "table_properties": {"data_size": 1384040, "index_size": 2458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 16488, "raw_average_key_size": 22, "raw_value_size": 1371300, "raw_average_value_size": 1838, "num_data_blocks": 104, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 380, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096259, "oldest_key_time": 1769096259, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 245, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 157] Flush lasted 15937 microseconds, and 8463 cpu microseconds.
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.734204) [db/flush_job.cc:967] [default] [JOB 157] Level-0 flush table #245: 1389488 bytes OK
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.734234) [db/memtable_list.cc:519] [default] Level-0 commit table #245 started
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737445) [db/memtable_list.cc:722] [default] Level-0 commit table #245: memtable #1 done
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737519) EVENT_LOG_v1 {"time_micros": 1769096329737503, "job": 157, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.737558) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 157] Try to delete WAL files size 2110885, prev total WAL file size 2110885, number of live WAL files 2.
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000241.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.738788) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0035373930' seq:72057594037927935, type:22 .. '6C6F676D0036303433' seq:0, type:0; will stop at (end)
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 158] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 157 Base level 0, inputs: [245(1356KB)], [243(12MB)]
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329738872, "job": 158, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [245], "files_L6": [243], "score": -1, "input_data_size": 14235376, "oldest_snapshot_seqno": -1}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 158] Generated table #246: 14520 keys, 14040889 bytes, temperature: kUnknown
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329839267, "cf_name": "default", "job": 158, "event": "table_file_creation", "file_number": 246, "file_size": 14040889, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13959194, "index_size": 44270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36357, "raw_key_size": 398915, "raw_average_key_size": 27, "raw_value_size": 13710225, "raw_average_value_size": 944, "num_data_blocks": 1614, "num_entries": 14520, "num_filter_entries": 14520, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096329, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 246, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.840831) [db/compaction/compaction_job.cc:1663] [default] [JOB 158] Compacted 1@0 + 1@6 files to L6 => 14040889 bytes
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.842082) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.8 rd, 138.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 12.3 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(20.4) write-amplify(10.1) OK, records in: 15299, records dropped: 779 output_compression: NoCompression
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.842115) EVENT_LOG_v1 {"time_micros": 1769096329842099, "job": 158, "event": "compaction_finished", "compaction_time_micros": 101085, "compaction_time_cpu_micros": 35810, "output_level": 6, "num_output_files": 1, "total_output_size": 14040889, "num_input_records": 15299, "num_output_records": 14520, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000245.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329844398, "job": 158, "event": "table_file_deletion", "file_number": 245}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000243.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096329849041, "job": 158, "event": "table_file_deletion", "file_number": 243}
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.738680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849271) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849273) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:38:49.849279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:38:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:50.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:50.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:50 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:50 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7317 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:51.034+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:51.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:51 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:52.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:52.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:52 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:53.024+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:53.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:53 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:54.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:38:54 np0005592159 podman[286703]: 2026-01-22 15:38:54.988694149 +0000 UTC m=+0.053952014 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:38:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:55.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:55 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7322 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:38:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:38:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:55.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:38:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:55.984+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:56 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:56.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:56.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:38:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:57.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:38:57 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:57.890+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:38:58.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:58.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:38:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:38:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:38:59.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:38:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:38:59.887+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:38:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:38:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:00 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)
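The SLOW_OPS health check above gives the age of the oldest blocked op in seconds; at 15:39:00 UTC, "blocked for 7327 sec" places that op's start at roughly 13:36:53 UTC, about two hours before this part of the capture. A small sketch of the arithmetic, assuming only the wording visible in the line:

```python
import re
from datetime import datetime, timedelta, timezone

# Wording copied from the health-check line above.
msg = "195 slow ops, oldest one blocked for 7327 sec, osd.2 has slow ops (SLOW_OPS)"
observed_at = datetime(2026, 1, 22, 15, 39, 0, tzinfo=timezone.utc)  # UTC timestamp of the entry

blocked_for = int(re.search(r'blocked for (\d+) sec', msg).group(1))
print((observed_at - timedelta(seconds=blocked_for)).isoformat())
# 2026-01-22T13:36:53+00:00
```

The later health checks (7333, 7338, 7343 ... sec) advance in step with the roughly five-second check interval, i.e. the same oldest op stays stuck rather than new ops ageing into the report.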
Jan 22 10:39:00 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:00.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:00.923+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:01.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:01.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:02.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:02.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:03.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:03 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:03.847+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:04.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:04.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:05.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:05 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7333 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:05 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:05.831+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:06.803+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:06.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:07.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:07.765+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:07 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:08.724+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:08.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:09 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:09.742+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:09.916 143497 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=66, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '4a:c6:58', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '6a:1c:e5:1b:fd:6b'}, ipsec=False) old=SB_Global(nb_cfg=65) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Jan 22 10:39:09 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:09.917 143497 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Jan 22 10:39:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:10 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7338 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:10 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:10.762+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:10.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:11 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:11.801+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:12 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:12.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:12.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:13 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:13.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:13.761+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:14 np0005592159 podman[286783]: 2026-01-22 15:39:14.05845642 +0000 UTC m=+0.118858697 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller)
Jan 22 10:39:14 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:14.752+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:14.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:15 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7343 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:15 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:15.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:15.704+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:16.746+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:16.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:17 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:17.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:17.779+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:18.805+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:18.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:18 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:18.919 143497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c4fa18b6-ed0f-47ac-8eec-d1399749aa8e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '66'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Jan 22 10:39:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:19.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:19 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:19.840+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:20.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:20.848+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:21 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7348 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:21 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:21.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:21.812+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:22.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:22.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.049440985 +0000 UTC m=+0.097704239 container create 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:22.982473657 +0000 UTC m=+0.030736941 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:23 np0005592159 systemd[1]: Started libpod-conmon-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope.
Jan 22 10:39:23 np0005592159 systemd[1]: Started libcrun container.
Jan 22 10:39:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:23.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.319438829 +0000 UTC m=+0.367702083 container init 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.326487802 +0000 UTC m=+0.374751086 container start 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.331010975 +0000 UTC m=+0.379274259 container attach 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 22 10:39:23 np0005592159 blissful_brattain[287153]: 167 167
Jan 22 10:39:23 np0005592159 systemd[1]: libpod-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope: Deactivated successfully.
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.343188008 +0000 UTC m=+0.391451302 container died 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Jan 22 10:39:23 np0005592159 systemd[1]: var-lib-containers-storage-overlay-d6e4c25ea71f036599990632dc70bab84b231a14431ff4efd4e70ae2eb0e70f5-merged.mount: Deactivated successfully.
Jan 22 10:39:23 np0005592159 podman[287136]: 2026-01-22 15:39:23.391563719 +0000 UTC m=+0.439826963 container remove 7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 22 10:39:23 np0005592159 systemd[1]: libpod-conmon-7388ce5ee3d99173f70197fceb574b7daa841b8d9bb8a2d748a9c53909dc30fd.scope: Deactivated successfully.
Jan 22 10:39:23 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:23 np0005592159 podman[287178]: 2026-01-22 15:39:23.581434444 +0000 UTC m=+0.065365176 container create 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 22 10:39:23 np0005592159 systemd[1]: Started libpod-conmon-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope.
Jan 22 10:39:23 np0005592159 podman[287178]: 2026-01-22 15:39:23.549490352 +0000 UTC m=+0.033421094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Jan 22 10:39:23 np0005592159 systemd[1]: Started libcrun container.
Jan 22 10:39:23 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:23 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:23 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:23 np0005592159 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 22 10:39:23 np0005592159 podman[287178]: 2026-01-22 15:39:23.683665606 +0000 UTC m=+0.167596338 container init 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Jan 22 10:39:23 np0005592159 podman[287178]: 2026-01-22 15:39:23.690574465 +0000 UTC m=+0.174505157 container start 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 22 10:39:23 np0005592159 podman[287178]: 2026-01-22 15:39:23.693641089 +0000 UTC m=+0.177571861 container attach 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 22 10:39:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:23.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:24.826+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:24.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]: [
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:    {
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "available": false,
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "ceph_device": false,
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "device_id": "QEMU_DVD-ROM_QM00001",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "lsm_data": {},
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "lvs": [],
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "path": "/dev/sr0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "rejected_reasons": [
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "Insufficient space (<5GB)",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "Has a FileSystem"
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        ],
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        "sys_api": {
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "actuators": null,
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "device_nodes": "sr0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "devname": "sr0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "human_readable_size": "482.00 KB",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "id_bus": "ata",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "model": "QEMU DVD-ROM",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "nr_requests": "2",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "parent": "/dev/sr0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "partitions": {},
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "path": "/dev/sr0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "removable": "1",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "rev": "2.5+",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "ro": "0",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "rotational": "1",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "sas_address": "",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "sas_device_handle": "",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "scheduler_mode": "mq-deadline",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "sectors": 0,
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "sectorsize": "2048",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "size": 493568.0,
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "support_discard": "2048",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "type": "disk",
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:            "vendor": "QEMU"
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:        }
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]:    }
Jan 22 10:39:24 np0005592159 nostalgic_newton[287195]: ]
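The `nostalgic_newton` block above is a ceph-volume/cephadm device-inventory report in JSON, printed one line per journal record: the node's only candidate device, /dev/sr0, is rejected for being under 5 GB and for already carrying a filesystem. A minimal sketch of reading such a report once the lines are stitched back together; the JSON below is shortened to the fields the sketch uses, and the loop is illustrative rather than any cephadm API:

```python
import json

# Reassembled (and abbreviated) form of the inventory report logged above.
report = json.loads("""
[
  {
    "available": false,
    "path": "/dev/sr0",
    "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"],
    "sys_api": {"model": "QEMU DVD-ROM", "size": 493568.0, "type": "disk"}
  }
]
""")

for dev in report:
    verdict = "usable" if dev["available"] else "rejected: " + "; ".join(dev["rejected_reasons"])
    print(f'{dev["path"]} ({dev["sys_api"]["model"]}, {dev["sys_api"]["size"]:.0f} bytes) -> {verdict}')
# /dev/sr0 (QEMU DVD-ROM, 493568 bytes) -> rejected: Insufficient space (<5GB); Has a FileSystem
```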
Jan 22 10:39:25 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:25 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7353 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:25 np0005592159 systemd[1]: libpod-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Deactivated successfully.
Jan 22 10:39:25 np0005592159 systemd[1]: libpod-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Consumed 1.361s CPU time.
Jan 22 10:39:25 np0005592159 podman[287178]: 2026-01-22 15:39:25.028786771 +0000 UTC m=+1.512717503 container died 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:39:25 np0005592159 systemd[1]: var-lib-containers-storage-overlay-c8b1142ccbf335480b995577fe7d87f8df451a3753a1aab61efbc6016c18fc4a-merged.mount: Deactivated successfully.
Jan 22 10:39:25 np0005592159 podman[287178]: 2026-01-22 15:39:25.088654376 +0000 UTC m=+1.572585068 container remove 4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 22 10:39:25 np0005592159 systemd[1]: libpod-conmon-4fc8cc84139abc258a816c26eaa0a142ddd799f3b381fccee791026af3a708db.scope: Deactivated successfully.
Jan 22 10:39:25 np0005592159 podman[288478]: 2026-01-22 15:39:25.143242407 +0000 UTC m=+0.077555029 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:39:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:25.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:25.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:26.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:26 np0005592159 ceph-mon[77081]: 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:39:26 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:26 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:26.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:27.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:27.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:39:28 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:28 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:39:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:28.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:28.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:29 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:29.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:29.934+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:30 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:30 np0005592159 ceph-mon[77081]: Health check update: 195 slow ops, oldest one blocked for 7358 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:30.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:30.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:31.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:31 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:31.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:32 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:32.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:32.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:33.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:33.986+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:39:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:34.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:34.960+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:35.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:35 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:35 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7363 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:35.924+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:36.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:36.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:37.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:37.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:38.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:39.002+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:39.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:39 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:39.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:40 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:40 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7368 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:40.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:40.997+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:41.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:41 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:41.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:42.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:42.932+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:43.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:43 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:43.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:44.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:44.944+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:45 np0005592159 podman[288616]: 2026-01-22 15:39:45.054109032 +0000 UTC m=+0.103554870 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 22 10:39:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:45.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:45 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:45 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7373 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:45.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:46 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:46.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:46.966+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.279 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.279 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:39:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:39:47.280 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:39:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:47.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:47 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:47.948+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:48 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:48.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:48.933+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:49.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:49 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:49.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:50 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7378 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:50 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:50.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:50.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:51.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:51 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:51.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:39:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:52.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:39:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:52.903+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:53.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:53 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:53.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:54 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:39:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:54.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:39:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:54.884+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:39:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:55.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:55 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7383 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:39:55 np0005592159 ceph-mon[77081]: 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:39:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:55.880+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:55 np0005592159 podman[288647]: 2026-01-22 15:39:55.996055899 +0000 UTC m=+0.053387379 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 22 10:39:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:56.860+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:56.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:56 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:57.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:57.857+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:57 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:57 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:58.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:39:58.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:59 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:39:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:39:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:39:59.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:39:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:39:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:39:59.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:39:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:00 np0005592159 ceph-mon[77081]: Health check update: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:00 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:00 np0005592159 ceph-mon[77081]: Health detail: HEALTH_WARN 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592159 ceph-mon[77081]: [WRN] SLOW_OPS: 79 slow ops, oldest one blocked for 7388 sec, osd.2 has slow ops
Jan 22 10:40:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:00.823+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:00.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:01.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:01.867+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:02 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:02.897+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:03.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:03.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #247. Immutable memtables: 0.
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.827852) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 159] Flushing memtable with next log file: 247
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404827927, "job": 159, "event": "flush_started", "num_memtables": 1, "num_entries": 1349, "num_deletes": 384, "total_data_size": 2279929, "memory_usage": 2306168, "flush_reason": "Manual Compaction"}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 159] Level-0 flush table #248: started
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404839561, "cf_name": "default", "job": 159, "event": "table_file_creation", "file_number": 248, "file_size": 989946, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 123146, "largest_seqno": 124490, "table_properties": {"data_size": 985175, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 16963, "raw_average_key_size": 23, "raw_value_size": 973300, "raw_average_value_size": 1333, "num_data_blocks": 77, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 384, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096330, "oldest_key_time": 1769096330, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 248, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 159] Flush lasted 11760 microseconds, and 6515 cpu microseconds.
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839620) [db/flush_job.cc:967] [default] [JOB 159] Level-0 flush table #248: 989946 bytes OK
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.839645) [db/memtable_list.cc:519] [default] Level-0 commit table #248 started
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.841971) [db/memtable_list.cc:722] [default] Level-0 commit table #248: memtable #1 done
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842012) EVENT_LOG_v1 {"time_micros": 1769096404842003, "job": 159, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842036) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 159] Try to delete WAL files size 2272850, prev total WAL file size 2272850, number of live WAL files 2.
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000244.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842987) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353039' seq:72057594037927935, type:22 .. '6D6772737461740033373632' seq:0, type:0; will stop at (end)
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 160] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 159 Base level 0, inputs: [248(966KB)], [246(13MB)]
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404843027, "job": 160, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [248], "files_L6": [246], "score": -1, "input_data_size": 15030835, "oldest_snapshot_seqno": -1}
Jan 22 10:40:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:04.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:04.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 160] Generated table #249: 14499 keys, 11546892 bytes, temperature: kUnknown
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404950079, "cf_name": "default", "job": 160, "event": "table_file_creation", "file_number": 249, "file_size": 11546892, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11468961, "index_size": 40570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36293, "raw_key_size": 398246, "raw_average_key_size": 27, "raw_value_size": 11223898, "raw_average_value_size": 774, "num_data_blocks": 1460, "num_entries": 14499, "num_filter_entries": 14499, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096404, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 249, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.950489) [db/compaction/compaction_job.cc:1663] [default] [JOB 160] Compacted 1@0 + 1@6 files to L6 => 11546892 bytes
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.952428) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.3 rd, 107.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 13.4 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(26.8) write-amplify(11.7) OK, records in: 15250, records dropped: 751 output_compression: NoCompression
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.952449) EVENT_LOG_v1 {"time_micros": 1769096404952439, "job": 160, "event": "compaction_finished", "compaction_time_micros": 107163, "compaction_time_cpu_micros": 43117, "output_level": 6, "num_output_files": 1, "total_output_size": 11546892, "num_input_records": 15250, "num_output_records": 14499, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000248.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404952930, "job": 160, "event": "table_file_deletion", "file_number": 248}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000246.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096404955987, "job": 160, "event": "table_file_deletion", "file_number": 246}
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.842900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956093) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:04 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:04.956095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:05.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:05 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7393 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:05 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:05.877+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:06.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:06.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:07.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:07 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:07.959+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:08.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:08.963+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:09.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:09 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:09 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:09.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:10 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:10 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7398 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:10.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:10.994+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:11.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:11 np0005592159 ceph-mon[77081]: 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:40:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:11.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:12 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:12.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:12.912+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:13.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:13.935+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:14.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:14 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:14.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:15.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:15 np0005592159 ceph-mon[77081]: Health check update: 98 slow ops, oldest one blocked for 7403 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:15.957+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:16 np0005592159 podman[288727]: 2026-01-22 15:40:16.068222726 +0000 UTC m=+0.122751093 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 10:40:16 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:16.998+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:17.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:17 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:18.001+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:18.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:19.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:19.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:19 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:19 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:19 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7408 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:19.996+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:20.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:20 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:20.987+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:21.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:21.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:21 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:22.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:22.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:22 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:23.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:23.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:24.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:24.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:24 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:25 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7413 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:25 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:25.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:25.892+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:26 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:26 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:26 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:26 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:26.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:26.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:26 np0005592159 podman[288808]: 2026-01-22 15:40:26.997355346 +0000 UTC m=+0.061008668 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 22 10:40:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:27.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:27 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:27.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:28.882+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:28 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:28 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:28 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:28 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:29.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:29 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7418 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:29.929+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:29 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:30 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:30 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:30 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:30 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:30.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:31.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:31.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #250. Immutable memtables: 0.
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.968991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 161] Flushing memtable with next log file: 250
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431969028, "job": 161, "event": "flush_started", "num_memtables": 1, "num_entries": 638, "num_deletes": 298, "total_data_size": 728685, "memory_usage": 741224, "flush_reason": "Manual Compaction"}
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 161] Level-0 flush table #251: started
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431973723, "cf_name": "default", "job": 161, "event": "table_file_creation", "file_number": 251, "file_size": 476871, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 124495, "largest_seqno": 125128, "table_properties": {"data_size": 473883, "index_size": 831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 9239, "raw_average_key_size": 21, "raw_value_size": 467164, "raw_average_value_size": 1076, "num_data_blocks": 36, "num_entries": 434, "num_filter_entries": 434, "num_deletions": 298, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096405, "oldest_key_time": 1769096405, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 251, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 161] Flush lasted 4757 microseconds, and 1762 cpu microseconds.
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.973751) [db/flush_job.cc:967] [default] [JOB 161] Level-0 flush table #251: 476871 bytes OK
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.973764) [db/memtable_list.cc:519] [default] Level-0 commit table #251 started
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975236) [db/memtable_list.cc:722] [default] Level-0 commit table #251: memtable #1 done
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975248) EVENT_LOG_v1 {"time_micros": 1769096431975245, "job": 161, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975261) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 161] Try to delete WAL files size 724905, prev total WAL file size 724905, number of live WAL files 2.
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000247.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975686) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130353432' seq:72057594037927935, type:22 .. '7061786F73003130373934' seq:0, type:0; will stop at (end)
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 162] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 161 Base level 0, inputs: [251(465KB)], [249(11MB)]
Jan 22 10:40:31 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096431975752, "job": 162, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [251], "files_L6": [249], "score": -1, "input_data_size": 12023763, "oldest_snapshot_seqno": -1}
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 162] Generated table #252: 14328 keys, 10215474 bytes, temperature: kUnknown
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432082002, "cf_name": "default", "job": 162, "event": "table_file_creation", "file_number": 252, "file_size": 10215474, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10139890, "index_size": 38671, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 35845, "raw_key_size": 394979, "raw_average_key_size": 27, "raw_value_size": 9899036, "raw_average_value_size": 690, "num_data_blocks": 1380, "num_entries": 14328, "num_filter_entries": 14328, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096431, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 252, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.082436) [db/compaction/compaction_job.cc:1663] [default] [JOB 162] Compacted 1@0 + 1@6 files to L6 => 10215474 bytes
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.085066) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 113.1 rd, 96.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 11.0 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(46.6) write-amplify(21.4) OK, records in: 14933, records dropped: 605 output_compression: NoCompression
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.085097) EVENT_LOG_v1 {"time_micros": 1769096432085084, "job": 162, "event": "compaction_finished", "compaction_time_micros": 106337, "compaction_time_cpu_micros": 52621, "output_level": 6, "num_output_files": 1, "total_output_size": 10215474, "num_input_records": 14933, "num_output_records": 14328, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000251.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432085438, "job": 162, "event": "table_file_deletion", "file_number": 251}
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000249.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096432089433, "job": 162, "event": "table_file_deletion", "file_number": 249}
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:31.975577) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089530) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:40:32.089539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:40:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:32.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:32 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:32 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:32 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:32.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:32 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:33.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:33.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:34 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:34 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:34.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:34.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7423 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:34 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:40:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:35.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:35.926+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:36 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:36.911+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:36 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:36 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:36 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:36.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:37 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:37.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:37.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:38 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:38.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:38 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:38 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:38 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:38.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:39.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:39 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:39.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:40 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7428 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:40 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:40.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:40 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:40 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:40 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:40.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:41.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:41.888+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:42 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:42.870+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:42 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:42 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:40:42 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:42.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:40:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:43 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:43 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:43 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:40:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:43.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:44.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:44 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:44 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:44 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:44.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:44 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:45.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:45.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:45 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7433 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:45 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:46.913+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:46 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:46 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:46 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:46.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:47 np0005592159 podman[289067]: 2026-01-22 15:40:47.090385811 +0000 UTC m=+0.146322527 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.280 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.281 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:40:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:40:47.282 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:40:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:47.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:47 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:47 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:47.907+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:48.868+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:48 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:48 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:48 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:48.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:49.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:49.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:49 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:49 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7438 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:50.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:50 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:50 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:40:50 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:40:50 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:50 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:51.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:51.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:52 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:52 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:52 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:52.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:52.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:53 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:53.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:53.953+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:54 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:54.915+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:54 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:54 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:54 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:54.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:54 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:40:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:55.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:55 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7443 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:40:55 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:55.918+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:56 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:56.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:56 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:56 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:56 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:57.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:57 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:57.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:57 np0005592159 podman[289149]: 2026-01-22 15:40:57.97564934 +0000 UTC m=+0.040724333 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 22 10:40:58 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:58 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:58 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:58 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:40:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:58.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:40:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:40:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:40:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:40:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:40:59.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:40:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:40:59 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:00 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:00 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7448 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:00 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:00 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:00 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:00.947+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:01.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:02 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:02.914+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:02 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:02 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:41:02 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:41:03 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:03.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:03.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:04 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:04 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:04 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:04.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:04 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:04.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:05 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7453 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:05.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:05.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:06 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:06 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:06 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:06 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:06.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:06.971+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:07 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:07.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:07.999+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:08 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:08 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:08 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:08 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:08.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:08.985+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:09.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:09.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:10 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:10 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:10 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:10 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:10.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:11.039+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7458 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:11 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:11.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:11.993+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:12 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:12 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:12 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:12 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:12.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:13.025+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:13 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:13.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:14.072+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:14 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:14 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:14 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:14 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:14.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:15.119+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:15.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:15 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:15 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7463 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:16.115+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:16 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:16 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:16 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:16 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:16.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:17.147+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:17.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:17 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592159 podman[289228]: 2026-01-22 15:41:18.05962804 +0000 UTC m=+0.095834618 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:41:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:18 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:18 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:18 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:18.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:19.131+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:41:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:19.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:41:19 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:20.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:20 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7467 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:20 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:20 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:41:20 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:20.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:41:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:21.077+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:21.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:21 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:22.065+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:22 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:22 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:22 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:22.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:23.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:23.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:23 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:24.073+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:24 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:24 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:24 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:24.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:25.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:41:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:25.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:41:25 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:25 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7472 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:26.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:27.061+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:27.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:27.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:27 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:28.016+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:28 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:29 np0005592159 podman[289259]: 2026-01-22 15:41:29.048629394 +0000 UTC m=+0.103666882 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:29.057+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:29.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:29.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:29 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:30.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:30 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7477 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:30 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:31.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:31.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:31.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:31 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:31.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:32 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:33.015+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:33.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:33.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:34 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:34.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:35.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:35.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:35 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:35 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7482 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:35.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:36.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:36 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:36 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:37.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:41:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:37.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:37 np0005592159 ceph-mon[77081]: 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:41:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000028s ======
Jan 22 10:41:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:37.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 22 10:41:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:38.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:38 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
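Annotation: between 10:41:37 and 10:41:38 the backlog on osd.2 drops from 199 slow ops to 6, yet the oldest op is still the same omap-get-vals read of rbd_mirror_snapshot_schedule, now marked RETRY=4 and resent under osdmap epoch e153 instead of e50. To see the ops behind a report like this, one would normally query the OSD's admin socket; a hedged sketch, assuming the ceph CLI and osd.2's admin socket are reachable from this host (inside the OSD container if needed) and that the JSON fields match this Ceph release:

```python
import json
import subprocess

# Sketch: list osd.2's in-flight ops via its admin socket.
# "ceph daemon osd.2 dump_ops_in_flight" is a standard admin-socket command,
# but in a containerized deployment it may need to run inside the OSD
# container (e.g. via cephadm shell), and field names vary by release.
raw = subprocess.run(
    ["ceph", "daemon", "osd.2", "dump_ops_in_flight"],
    capture_output=True, text=True, check=True,
).stdout

for op in json.loads(raw).get("ops", []):
    print(op.get("age"), op.get("description"))
```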
Jan 22 10:41:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:39.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:39.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:39.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:39 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:40.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
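Annotation: in the _set_new_cache_sizes line above, the three allocations roughly partition the monitor's ~1 GB cache budget; a quick sanity check with the byte counts copied verbatim from the log (the GiB conversion and the ratio are my own arithmetic):

```python
# Byte counts copied verbatim from the _set_new_cache_sizes line above.
cache_size = 1_020_054_731
inc_alloc, full_alloc, kv_alloc = 343_932_928, 348_127_232, 318_767_104

print(cache_size / 2**30)                                # ~0.95 GiB total cache budget
print((inc_alloc + full_alloc + kv_alloc) / cache_size)  # ~0.99 -- the three pools roughly fill it
```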
Jan 22 10:41:40 np0005592159 ceph-mon[77081]: Health check update: 199 slow ops, oldest one blocked for 7487 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:40 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:41.056+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:41.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:41.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:41 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:42.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:42 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:43.017+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:43.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:43.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:43 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:44.063+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 22 10:41:44 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7492 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:45 np0005592159 podman[289626]: 2026-01-22 15:41:45.066689132 +0000 UTC m=+0.058301913 container exec ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 22 10:41:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:45.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:45.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:45 np0005592159 podman[289626]: 2026-01-22 15:41:45.154718116 +0000 UTC m=+0.146330877 container exec_died ad3fee4799b44f9e04b5aa9968630e9af6ffd410d7fc49c4207495984cf6bca6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-mon-compute-2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 22 10:41:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:45.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:45 np0005592159 podman[289784]: 2026-01-22 15:41:45.85641802 +0000 UTC m=+0.057016948 container exec ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:41:45 np0005592159 podman[289784]: 2026-01-22 15:41:45.863185494 +0000 UTC m=+0.063784412 container exec_died ff608106d7c871852a462621c4b38466f0a089e42add90baa06df30604a36e5f (image=quay.io/ceph/haproxy:2.3, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-haproxy-rgw-default-compute-2-zogxki)
Jan 22 10:41:46 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:46 np0005592159 podman[289850]: 2026-01-22 15:41:46.064973725 +0000 UTC m=+0.048351361 container exec 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, release=1793, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2)
Jan 22 10:41:46 np0005592159 podman[289850]: 2026-01-22 15:41:46.076569222 +0000 UTC m=+0.059946838 container exec_died 6667054e4ceb45c7be5e11486852d0790d9219015e6bca7cdf08e071806b9af4 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-088fe176-0106-5401-803c-2da38b73b76a-keepalived-rgw-default-compute-2-xbsrtt, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-type=git, architecture=x86_64, description=keepalived for Ceph, distribution-scope=public, version=2.2.4, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Jan 22 10:41:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:46.102+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:47 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:47.101+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:47.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.282 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:41:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:41:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
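Annotation: the three ovn_metadata_agent lines above show neutron's ProcessMonitor taking and releasing its "_check_child_processes" lock within a millisecond; the Acquiring/acquired/released trio is what oslo.concurrency's lockutils emits at DEBUG level around any named lock. A minimal illustration of that pattern (assumes oslo.concurrency is installed; this is not neutron's actual ProcessMonitor code):

```python
from oslo_concurrency import lockutils

# Minimal illustration of the pattern behind those DEBUG lines: lockutils
# wraps the call in a named lock and logs acquire/release around it.
@lockutils.synchronized('_check_child_processes')
def _check_child_processes():
    pass  # neutron walks its monitored child processes here

_check_child_processes()
```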
Jan 22 10:41:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:47.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:48.076+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:48 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:41:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:48 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:41:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:49.104+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592159 podman[290016]: 2026-01-22 15:41:49.124140206 +0000 UTC m=+0.162410681 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:41:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:49.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:49 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:50.093+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:50 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7497 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:50 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:51.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:51.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:51 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:51.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:52.098+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:52 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:53.071+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:53.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:53 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:53.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:54.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:41:54 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:55.042+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:55.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:41:55 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7502 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:41:55 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:56.075+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:56 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:57.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:57.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:57 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:41:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:57.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:41:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:58.125+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:41:59.126+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:41:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:41:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:41:59.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:41:59 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:41:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:41:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:41:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:41:59.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:00 np0005592159 podman[290150]: 2026-01-22 15:42:00.019704422 +0000 UTC m=+0.073151973 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 22 10:42:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:00.141+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:00 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:00 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7507 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:01.100+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:01.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:01.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:01 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:01 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:02.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:02 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:03.134+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:03.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:03.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:04.171+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:04 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:05.150+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:05.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:05 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:05 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7512 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:05.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:06.108+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:06 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:06 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:07.099+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:07.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:07 np0005592159 ceph-mon[77081]: 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:42:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:07.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:08.145+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:09.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:09.178+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:09 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:09.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:10.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:10 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:10 np0005592159 ceph-mon[77081]: Health check update: 6 slow ops, oldest one blocked for 7517 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:11.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:11.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:11.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:11 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:12.157+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:13.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:13.191+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:13.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:14.160+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:14 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:14 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:15.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:15.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:15 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 7522 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:15 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:15.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:16.107+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:16 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:17.149+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:17.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:17.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:42:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:42:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1173385838' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:42:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:19.127+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:19.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:19.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:20 np0005592159 podman[290230]: 2026-01-22 15:42:20.020247503 +0000 UTC m=+0.083684121 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 22 10:42:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:20.128+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:20 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:21.123+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:21.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:21 np0005592159 ceph-mon[77081]: Health check update: 7 slow ops, oldest one blocked for 7527 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:21 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:21.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:22.117+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:22 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:23.111+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:23.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:23 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:23.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:24.161+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:25.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:25.204+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:25.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:25 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:25 np0005592159 ceph-mon[77081]: Health check update: 179 slow ops, oldest one blocked for 7532 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:26.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:26 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:27.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:27.211+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:27.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:27 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:28.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:28 np0005592159 ceph-mon[77081]: 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:42:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:29.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:29.192+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:29.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:29 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:30.148+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:31 np0005592159 podman[290261]: 2026-01-22 15:42:31.001392836 +0000 UTC m=+0.065718766 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:42:31 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:31 np0005592159 ceph-mon[77081]: Health check update: 179 slow ops, oldest one blocked for 7537 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:31.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:31.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:31.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:32 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:32.181+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:33.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:33.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:33.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:34 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:34.244+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:35 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:35 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7542 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:35.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:35.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:35.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:36 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:36.233+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:37.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:37 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:37.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:37.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:38 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:38.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:39.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:39.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:39 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:39.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:40.235+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:40 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:40 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7547 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:41.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:41 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:41.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:41.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:42.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:42 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:43.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:43.282+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:44.332+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:44 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:45.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:45.295+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:45 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7552 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:46.258+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:46 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:47.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:47.237+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.283 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.284 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:42:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:42:47.284 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:42:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:47.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:47 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:48.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:49 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:49.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:49.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:49.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:50 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:50 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7557 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:50.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:51 np0005592159 podman[290342]: 2026-01-22 15:42:51.061103416 +0000 UTC m=+0.118854170 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 22 10:42:51 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:51.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:51.307+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:51.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:52 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:52.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:53 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:53.285+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:42:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:53.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:42:54 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:54.317+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:42:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:55.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:42:55 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7562 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:42:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:55.357+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:42:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:55.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:42:56 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:56.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:57 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:57.339+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:57.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:58 np0005592159 ceph-mon[77081]: 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:42:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:42:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:58.294+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:42:59.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:42:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:42:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:42:59.270+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:42:59 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:42:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:42:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:42:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:42:59.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:00.298+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:00 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:00 np0005592159 ceph-mon[77081]: Health check update: 158 slow ops, oldest one blocked for 7567 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:01.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:01.292+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:01 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:01.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:02 np0005592159 podman[290581]: 2026-01-22 15:43:02.014725573 +0000 UTC m=+0.116145499 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 22 10:43:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:02.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:02 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:02 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:43:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:03.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:03.266+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:03 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:03.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:04.218+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:04 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:05.195+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:05.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:05 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:05 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7572 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:05.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:06.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:06 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:07.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:07.257+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:07.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:08.223+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:08 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:09.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:09.242+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:09.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:10.234+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:10 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:10 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7577 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:11.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:11.281+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:11.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:11 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:12.322+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:13.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:13.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:13.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:14 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:14.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:15.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:15.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:15 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:15 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7582 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:15.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:16.403+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:17.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:17.413+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:17 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:17.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:18.424+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:18 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:18 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:19.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:19.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:19 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:19.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:20.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:20 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:20 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7587 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:21.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:21.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:21 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:21.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:22 np0005592159 podman[290686]: 2026-01-22 15:43:22.06462816 +0000 UTC m=+0.113336905 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:43:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:22.386+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #253. Immutable memtables: 0.
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.646946) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 163] Flushing memtable with next log file: 253
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602646978, "job": 163, "event": "flush_started", "num_memtables": 1, "num_entries": 2756, "num_deletes": 540, "total_data_size": 5068168, "memory_usage": 5144432, "flush_reason": "Manual Compaction"}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 163] Level-0 flush table #254: started
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602666748, "cf_name": "default", "job": 163, "event": "table_file_creation", "file_number": 254, "file_size": 3292028, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 125133, "largest_seqno": 127884, "table_properties": {"data_size": 3281741, "index_size": 5692, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3525, "raw_key_size": 32521, "raw_average_key_size": 23, "raw_value_size": 3257045, "raw_average_value_size": 2344, "num_data_blocks": 239, "num_entries": 1389, "num_filter_entries": 1389, "num_deletions": 540, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096432, "oldest_key_time": 1769096432, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 254, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 163] Flush lasted 19845 microseconds, and 9201 cpu microseconds.
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.666791) [db/flush_job.cc:967] [default] [JOB 163] Level-0 flush table #254: 3292028 bytes OK
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.666809) [db/memtable_list.cc:519] [default] Level-0 commit table #254 started
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669093) [db/memtable_list.cc:722] [default] Level-0 commit table #254: memtable #1 done
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669108) EVENT_LOG_v1 {"time_micros": 1769096602669104, "job": 163, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.669142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 163] Try to delete WAL files size 5054660, prev total WAL file size 5054660, number of live WAL files 2.
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000250.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670725) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F73003130373933' seq:72057594037927935, type:22 .. '7061786F73003131303435' seq:0, type:0; will stop at (end)
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 164] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 163 Base level 0, inputs: [254(3214KB)], [252(9976KB)]
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602670787, "job": 164, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [254], "files_L6": [252], "score": -1, "input_data_size": 13507502, "oldest_snapshot_seqno": -1}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 164] Generated table #255: 14620 keys, 11671371 bytes, temperature: kUnknown
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602790201, "cf_name": "default", "job": 164, "event": "table_file_creation", "file_number": 255, "file_size": 11671371, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11592337, "index_size": 41353, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36613, "raw_key_size": 400558, "raw_average_key_size": 27, "raw_value_size": 11344915, "raw_average_value_size": 775, "num_data_blocks": 1495, "num_entries": 14620, "num_filter_entries": 14620, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096602, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 255, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.790825) [db/compaction/compaction_job.cc:1663] [default] [JOB 164] Compacted 1@0 + 1@6 files to L6 => 11671371 bytes
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.792739) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 112.7 rd, 97.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 9.7 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 15717, records dropped: 1097 output_compression: NoCompression
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.792769) EVENT_LOG_v1 {"time_micros": 1769096602792755, "job": 164, "event": "compaction_finished", "compaction_time_micros": 119817, "compaction_time_cpu_micros": 28082, "output_level": 6, "num_output_files": 1, "total_output_size": 11671371, "num_input_records": 15717, "num_output_records": 14620, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000254.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602794828, "job": 164, "event": "table_file_deletion", "file_number": 254}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000252.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096602798979, "job": 164, "event": "table_file_deletion", "file_number": 252}
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.670617) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:22 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:22.799176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:23.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:23.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:23 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:23.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:24.378+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:24 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:25.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:25.371+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:25.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:25 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:25 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7592 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:26.338+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:26 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:27.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:27.388+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:27.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:27 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:28.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:29 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:29.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:29.400+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:29.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:30 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7597 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:30.392+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:31 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:31.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:31.394+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:31.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:32 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:32.358+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:33 np0005592159 podman[290717]: 2026-01-22 15:43:33.028418254 +0000 UTC m=+0.079783928 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 22 10:43:33 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:33.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:33.381+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:33.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:34 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:34.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:35 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7602 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:35.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:35.382+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:35.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:36.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:36 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:37.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:37.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:37 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:37.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:38.359+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:38 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:39.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:39.379+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:39 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:39.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #256. Immutable memtables: 0.
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.131271) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:856] [default] [JOB 165] Flushing memtable with next log file: 256
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620131398, "job": 165, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 287, "total_data_size": 454806, "memory_usage": 464472, "flush_reason": "Manual Compaction"}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:885] [default] [JOB 165] Level-0 flush table #257: started
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620136895, "cf_name": "default", "job": 165, "event": "table_file_creation", "file_number": 257, "file_size": 297704, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 127890, "largest_seqno": 128386, "table_properties": {"data_size": 295140, "index_size": 535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7222, "raw_average_key_size": 19, "raw_value_size": 289573, "raw_average_value_size": 774, "num_data_blocks": 23, "num_entries": 374, "num_filter_entries": 374, "num_deletions": 287, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769096603, "oldest_key_time": 1769096603, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 257, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 165] Flush lasted 5665 microseconds, and 2730 cpu microseconds.
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136948) [db/flush_job.cc:967] [default] [JOB 165] Level-0 flush table #257: 297704 bytes OK
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.136970) [db/memtable_list.cc:519] [default] Level-0 commit table #257 started
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138571) [db/memtable_list.cc:722] [default] Level-0 commit table #257: memtable #1 done
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138592) EVENT_LOG_v1 {"time_micros": 1769096620138584, "job": 165, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.138614) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 165] Try to delete WAL files size 451641, prev total WAL file size 451641, number of live WAL files 2.
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000253.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.139269) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0036303432' seq:72057594037927935, type:22 .. '6C6F676D0036323937' seq:0, type:0; will stop at (end)
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 166] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 165 Base level 0, inputs: [257(290KB)], [255(11MB)]
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620139361, "job": 166, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [257], "files_L6": [255], "score": -1, "input_data_size": 11969075, "oldest_snapshot_seqno": -1}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 166] Generated table #258: 14411 keys, 11804872 bytes, temperature: kUnknown
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620228473, "cf_name": "default", "job": 166, "event": "table_file_creation", "file_number": 258, "file_size": 11804872, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11726693, "index_size": 41074, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 36037, "raw_key_size": 397059, "raw_average_key_size": 27, "raw_value_size": 11482428, "raw_average_value_size": 796, "num_data_blocks": 1479, "num_entries": 14411, "num_filter_entries": 14411, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769088928, "oldest_key_time": 0, "file_creation_time": 1769096620, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "2fc6eab8-1992-4005-a2ff-000040659fe1", "db_session_id": "HOKNYZUMFPVI0T4U6KMU", "orig_file_number": 258, "seqno_to_time_mapping": "N/A"}}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.228763) [db/compaction/compaction_job.cc:1663] [default] [JOB 166] Compacted 1@0 + 1@6 files to L6 => 11804872 bytes
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.230591) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.2 rd, 132.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(79.9) write-amplify(39.7) OK, records in: 14994, records dropped: 583 output_compression: NoCompression
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.230614) EVENT_LOG_v1 {"time_micros": 1769096620230604, "job": 166, "event": "compaction_finished", "compaction_time_micros": 89199, "compaction_time_cpu_micros": 42043, "output_level": 6, "num_output_files": 1, "total_output_size": 11804872, "num_input_records": 14994, "num_output_records": 14411, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000257.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620230845, "job": 166, "event": "table_file_deletion", "file_number": 257}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000255.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769096620233749, "job": 166, "event": "table_file_deletion", "file_number": 255}
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.139145) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: rocksdb: (Original Log Time 2026/01/22-15:43:40.233895) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 22 10:43:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:40.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:40 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7607 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:41.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:41.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:41.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:42 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:42.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:43 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:43.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:43.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:43.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:44 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:44.402+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:45.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:45 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:45 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7612 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:45.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:45.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:46.401+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:46 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:47.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.285 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.285 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:43:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:43:47.286 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:43:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:47.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:47 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:43:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:47.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:43:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:48.384+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:48 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:49.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:49.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:49.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:50 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:50.467+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7617 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:51.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:51.492+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:43:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:51.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:43:52 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:52.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:53 np0005592159 podman[290797]: 2026-01-22 15:43:53.00913944 +0000 UTC m=+0.074117919 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 22 10:43:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:53.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:53 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:53.505+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:53.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:54 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:54.465+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:43:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:55.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:55.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:55 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:55 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7622 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:43:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:55.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:56.535+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:56 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:57.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:57.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:57.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:57 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:58 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:43:58 np0005592159 rsyslogd[1002]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 22 10:43:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:58.528+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:58 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:43:59.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:43:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:43:59.500+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:43:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:43:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:43:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:43:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:43:59 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:00.547+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:00 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7627 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:01.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:01.532+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:01.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:01 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:02.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:02 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:03.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:03.520+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:03 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:44:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:03 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:44:04 np0005592159 podman[291012]: 2026-01-22 15:44:04.006083187 +0000 UTC m=+0.066741033 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 22 10:44:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:04.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:05.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:05 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:05.515+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:06 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:06 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7633 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:06 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:06.473+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:07.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:07 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:07.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:07.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:08 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:08.472+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:09.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:09.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:44:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:09.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:10.456+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:10 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:10 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7638 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:11.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:11.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:11 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:11.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:12 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:12.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:13.536+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:13.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:14.526+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:14 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:15.488+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:15.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:15 np0005592159 ceph-mon[77081]: 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:15 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7642 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:16.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:16 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:17.459+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:17.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:17 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:18.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:18 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:19.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:19.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:20 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:20.485+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592159 ceph-mon[77081]: Health check update: 207 slow ops, oldest one blocked for 7648 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:21.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:21.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:21.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:22 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:22.510+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:23.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:23.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:23.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:23 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:23 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:24 np0005592159 podman[291142]: 2026-01-22 15:44:24.08659622 +0000 UTC m=+0.134190135 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Jan 22 10:44:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:24.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:25.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:25.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:25.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:26 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:26 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7653 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:26.494+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:27 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:27.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:27.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:27.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:28 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:28.436+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:29.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:29 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:29.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:29.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:30 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:30 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7658 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:30.435+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:31.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:31.468+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:31.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:32 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:32.430+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:33.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:33.395+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:33.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:34 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:34.368+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:35 np0005592159 podman[291175]: 2026-01-22 15:44:35.008706763 +0000 UTC m=+0.067624907 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 22 10:44:35 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:35.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:35.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:35.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:36 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:36 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7662 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:36.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:37 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:37.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:37.387+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:37.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:38 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:38 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:38.399+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:39.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:39.417+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:39.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:40 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:40 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7667 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:40.376+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:41.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:41 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:41.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:41.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:42 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:42.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:43.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:43.361+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:43.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:44.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:44 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:45.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:45.362+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:45.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:45 np0005592159 ceph-mon[77081]: 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:44:45 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7672 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:46.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:47 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.286 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Jan 22 10:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Jan 22 10:44:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:44:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Jan 22 10:44:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:47.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:47.366+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:47.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:48.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:48 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:49.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:49.397+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:49.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:50.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:50 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:50 np0005592159 ceph-mon[77081]: Health check update: 127 slow ops, oldest one blocked for 7677 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:51.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:51.389+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:51.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:51 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:52.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:52 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:53.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:53.373+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:53.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:53 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:54.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:54 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592159 podman[291254]: 2026-01-22 15:44:55.126572161 +0000 UTC m=+0.168558863 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 22 10:44:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:44:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:55.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:55 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:55.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:55 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:55 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:44:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:55.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:44:55 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:55 np0005592159 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7682 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:44:56 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:56.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:56 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:56 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:56 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:57.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:57 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:57.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:57 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:57 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:57 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:57 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:57 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:57.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:57 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:58 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:58.419+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:58 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:58 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:58 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:44:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:44:59.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:44:59 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:44:59.428+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:59 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:44:59 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:44:59 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:44:59 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:44:59 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:44:59.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:44:59 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:00 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:00 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:00.451+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:00 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:00 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:01 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:01 np0005592159 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7687 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:01.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:01 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:01.490+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:01 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:01 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:01 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:01 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:01 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:01.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:02 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:02 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:02.443+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:02 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:02 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:03.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:03 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:03.433+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:03 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:03 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:03 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:03 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:03 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:03.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:04 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:04 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:04.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:04 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:04 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:05.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:05 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592159 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7692 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:05 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:05.418+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:05 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:05 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:05 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:05 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:05 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:05.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:06 np0005592159 podman[291336]: 2026-01-22 15:45:06.026162179 +0000 UTC m=+0.073433431 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 22 10:45:06 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:06 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:06.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:06 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:06 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:07.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:07 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:07.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:07 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:07 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:07 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:07 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:07 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:07.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:08 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:08 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:08.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:08 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:08 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:09.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:09 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:09.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:09 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:09 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:09 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:09 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:09 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:09.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:10 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:10.514+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:10 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:10 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7697 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:10 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 22 10:45:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:11.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:11 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:11.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:11 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:11 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:11 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:11 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:11 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:11 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:11.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:12 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:12.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:12 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:12 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:12 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:13.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:13 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:13.502+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:13 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:13 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:13 np0005592159 ceph-mon[77081]: 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:13 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:13 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:13 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:13.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:14 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:14.493+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:14 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:14 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:14 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:15.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:15 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:15.543+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:15 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:15 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:15 np0005592159 ceph-mon[77081]: Health check update: 211 slow ops, oldest one blocked for 7702 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:15 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:15 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:15 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:15.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:16 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:16.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:16 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:16 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:16 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:16 np0005592159 ceph-mon[77081]: from='mgr.14132 192.168.122.100:0/2758575857' entity='mgr.compute-0.nyayzk' 
Jan 22 10:45:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:17.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:17 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:17.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:17 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:17 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:17 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:17 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:17 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:17.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:17 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:18 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:18.479+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:18 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:18 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Jan 22 10:45:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 22 10:45:18 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Jan 22 10:45:18 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1023279560' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 22 10:45:18 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:19.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:19 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:19.439+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:19 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:19 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:19 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:19 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:19 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:19.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:20 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:20 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:20 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:20.471+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:20 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:20 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:21.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:21 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7708 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:21 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:21.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:21 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:21 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:21 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:21 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:21 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:21.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:22 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:22.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:22 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:22 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:22 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:23.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:23 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:23.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:23 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:23 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:23 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:23 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:23 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:23.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:24 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:24.498+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:24 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:24 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:24 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:25.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:25 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:25.512+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:25 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:25 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:25 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7712 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:25 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:25 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:25 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:25.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:26 np0005592159 podman[291596]: 2026-01-22 15:45:26.093347551 +0000 UTC m=+0.144379146 container health_status 8eec14eed05eebd169934b14ad23738a7c696fad3b7be75ce9a652966539c356 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 22 10:45:26 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:26.653+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:26 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:26 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:27 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:27.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:27 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:27.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:27 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:27 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:27 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:27 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:27 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:27.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:28 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:28.738+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:28 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:28 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:28 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:28 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:29 np0005592159 ceph-mon[77081]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 7800.0 total, 600.0 interval#012Cumulative writes: 24K writes, 129K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.22 GB, 0.03 MB/s#012Cumulative WAL: 24K writes, 24K syncs, 1.00 writes per sync, written: 0.22 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1848 writes, 10K keys, 1848 commit groups, 1.0 writes per commit group, ingest: 16.83 MB, 0.03 MB/s#012Interval WAL: 1848 writes, 1848 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     56.5      2.41              0.51        83    0.029       0      0       0.0       0.0#012  L6      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   6.0    111.9     97.1      8.44              2.77        82    0.103    926K    51K       0.0       0.0#012 Sum      1/0   11.26 MB   0.0      0.9     0.1      0.8       0.9      0.1       0.0   7.0     87.0     88.1     10.86              3.28       165    0.066    926K    51K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    110.5    111.1      0.70              0.28        12    0.059     91K   4912       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.9     0.1      0.8       0.8      0.0       0.0   0.0    111.9     97.1      8.44              2.77        82    0.103    926K    51K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     56.5      2.41              0.51        82    0.029       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 7800.0 total, 600.0 interval#012Flush(GB): cumulative 0.133, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.93 GB write, 0.12 MB/s write, 0.92 GB read, 0.12 MB/s read, 10.9 seconds#012Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4cf3991f0#2 capacity: 304.00 MB usage: 96.17 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.000626 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(5007,90.60 MB,29.8026%) FilterBlock(165,2.52 MB,0.828045%) IndexBlock(165,3.05 MB,1.00344%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Jan 22 10:45:29 np0005592159 systemd-logind[787]: New session 51 of user zuul.
Jan 22 10:45:29 np0005592159 systemd[1]: Started Session 51 of User zuul.
Jan 22 10:45:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:29.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:29 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:29.725+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:29 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:29 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:29 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:29 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:29 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:29.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:30 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:30 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:30.681+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:30 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:30 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:30 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:30 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7718 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:31.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:31 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:31.693+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:31 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:31 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:31 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:31 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:31 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:31 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:31.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:32 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:32.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:32 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:32 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:33 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:33 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0) v1
Jan 22 10:45:33 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/238792465' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 22 10:45:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:33.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:33 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:33.627+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:33 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:33 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:33 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:33 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:33 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:33.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:34 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:34 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:34.656+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:34 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:34 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:35 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7723 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:35.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:35 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:35.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:35 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:35 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:35 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:35 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:35 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:35.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:36 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:36 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:36 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:36 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:36.648+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:36 np0005592159 ovs-vsctl[291917]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 22 10:45:37 np0005592159 podman[291945]: 2026-01-22 15:45:37.00928715 +0000 UTC m=+0.052752926 container health_status 65cda04b9c9e71d648ab5510147314c4de15a37ca8d4a48196c50c9ad6ccb44d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '109b2e65a809d9df2b2d81c602046702b988fc7a594c944e65d89c0e3a64ae71-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-32177f2c3fa09030b0d1ae5cc46811ab0cd45ff7cf090b1a287b538f8d13e58d-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 22 10:45:37 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:37.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:37 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:37.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:37 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:37 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:37 np0005592159 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 22 10:45:37 np0005592159 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 22 10:45:37 np0005592159 virtqemud[225907]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 22 10:45:37 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:37 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:37 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:37.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:38 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:38 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: cache status {prefix=cache status} (starting...)
Jan 22 10:45:38 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: client ls {prefix=client ls} (starting...)
Jan 22 10:45:38 np0005592159 lvm[292291]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 22 10:45:38 np0005592159 lvm[292291]: VG ceph_vg0 finished
Jan 22 10:45:38 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:38 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:38 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:38.745+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: damage ls {prefix=damage ls} (starting...)
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump loads {prefix=dump loads} (starting...)
Jan 22 10:45:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:39.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 22 10:45:39 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0) v1
Jan 22 10:45:39 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1538690967' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 22 10:45:39 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:39 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:39 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:39.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:39 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 22 10:45:39 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:39 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:39 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:39.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:40 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:40 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 22 10:45:40 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: ops {prefix=ops} (starting...)
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3339043403' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 22 10:45:40 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:40.815+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:40 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:40 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0) v1
Jan 22 10:45:40 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2675750719' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 22 10:45:41 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: session ls {prefix=session ls} (starting...)
Jan 22 10:45:41 np0005592159 ceph-mds[81154]: mds.cephfs.compute-2.zycvef asok_command: status {prefix=status} (starting...)
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1892410733' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7728 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:41.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1561478504' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:41 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/195662771' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:41 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:41.808+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:41 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:41 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:41 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:41 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000027s ======
Jan 22 10:45:41 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:41.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3523237783' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/157024880' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0) v1
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4174554485' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 22 10:45:42 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:42.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:42 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:42 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:42 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/928646561' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4016002785' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1868857439' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 22 10:45:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:43.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4271205578' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:43 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:43.825+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:43 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:43 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4176284565' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Jan 22 10:45:43 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/52836653' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 22 10:45:43 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:43 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:43 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:43.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Jan 22 10:45:44 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2181286671' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 22 10:45:44 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9997> 2026-01-22T15:31:48.210+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9984> 2026-01-22T15:31:49.255+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,4,1,1,15,34,33,65,24])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9972> 2026-01-22T15:31:50.247+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,4,1,0,16,34,33,65,24])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9962> 2026-01-22T15:31:51.282+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9948> 2026-01-22T15:31:52.324+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735caeec00 session 0x55735a69a960
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9931> 2026-01-22T15:31:53.348+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9920> 2026-01-22T15:31:54.394+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,5,4,1,0,16,34,33,65,24])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9906> 2026-01-22T15:31:55.439+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9897> 2026-01-22T15:31:56.414+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9886> 2026-01-22T15:31:57.442+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9872> 2026-01-22T15:31:58.452+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,5,1,0,16,34,32,65,25])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9860> 2026-01-22T15:31:59.447+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9849> 2026-01-22T15:32:00.438+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,5,1,0,16,34,32,65,25])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9837> 2026-01-22T15:32:01.464+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 59 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 59 slow requests (by type [ 'delayed' : 59 ] most affected pool [ 'vms' : 36 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9826> 2026-01-22T15:32:02.502+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9814> 2026-01-22T15:32:03.461+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9803> 2026-01-22T15:32:04.412+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9790> 2026-01-22T15:32:05.430+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,8,2,0,16,34,32,65,25])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9778> 2026-01-22T15:32:06.481+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9767> 2026-01-22T15:32:07.437+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,5,0,16,34,32,65,25])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9752> 2026-01-22T15:32:08.393+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9742> 2026-01-22T15:32:09.344+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,65,25])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9732> 2026-01-22T15:32:10.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9721> 2026-01-22T15:32:11.382+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9708> 2026-01-22T15:32:12.334+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9693> 2026-01-22T15:32:13.363+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9684> 2026-01-22T15:32:14.390+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206708736 unmapped: 2596864 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9670> 2026-01-22T15:32:15.388+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9659> 2026-01-22T15:32:16.383+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9648> 2026-01-22T15:32:17.404+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9635> 2026-01-22T15:32:18.453+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9624> 2026-01-22T15:32:19.489+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,64,26])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9613> 2026-01-22T15:32:20.470+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,5,0,16,34,32,61,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9600> 2026-01-22T15:32:21.424+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9591> 2026-01-22T15:32:22.401+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 159 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 159 slow requests (by type [ 'delayed' : 159 ] most affected pool [ 'vms' : 95 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9576> 2026-01-22T15:32:23.441+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9557> 2026-01-22T15:32:24.490+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9546> 2026-01-22T15:32:25.501+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,1,16,34,31,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9536> 2026-01-22T15:32:26.515+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9527> 2026-01-22T15:32:27.552+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9513> 2026-01-22T15:32:28.578+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9504> 2026-01-22T15:32:29.540+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,1,16,34,31,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9488> 2026-01-22T15:32:30.583+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9475> 2026-01-22T15:32:31.614+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9464> 2026-01-22T15:32:32.570+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9448> 2026-01-22T15:32:33.539+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,5,1,16,34,31,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9436> 2026-01-22T15:32:34.554+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9425> 2026-01-22T15:32:35.553+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9414> 2026-01-22T15:32:36.556+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9403> 2026-01-22T15:32:37.545+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,34,31,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9390> 2026-01-22T15:32:38.559+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9379> 2026-01-22T15:32:39.606+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9368> 2026-01-22T15:32:40.597+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9357> 2026-01-22T15:32:41.549+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9343> 2026-01-22T15:32:42.562+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9331> 2026-01-22T15:32:43.526+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9320> 2026-01-22T15:32:44.565+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9309> 2026-01-22T15:32:45.613+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 139 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 139 slow requests (by type [ 'delayed' : 139 ] most affected pool [ 'vms' : 84 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c7cc000 session 0x55735a5254a0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9296> 2026-01-22T15:32:46.621+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206716928 unmapped: 2588672 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9284> 2026-01-22T15:32:47.661+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9271> 2026-01-22T15:32:48.682+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9258> 2026-01-22T15:32:49.719+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9248> 2026-01-22T15:32:50.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 183 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 183 slow requests (by type [ 'delayed' : 183 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735b583c00 session 0x55735c5e83c0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9235> 2026-01-22T15:32:51.652+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9223> 2026-01-22T15:32:52.637+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9211> 2026-01-22T15:32:53.645+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9198> 2026-01-22T15:32:54.628+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,9,1,16,33,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9184> 2026-01-22T15:32:55.678+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9173> 2026-01-22T15:32:56.722+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9162> 2026-01-22T15:32:57.699+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9148> 2026-01-22T15:32:58.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9137> 2026-01-22T15:32:59.767+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9125> 2026-01-22T15:33:00.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9111> 2026-01-22T15:33:01.731+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9102> 2026-01-22T15:33:02.705+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9088> 2026-01-22T15:33:03.704+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9074> 2026-01-22T15:33:04.747+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9065> 2026-01-22T15:33:05.742+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,10,1,12,37,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9053> 2026-01-22T15:33:06.776+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9044> 2026-01-22T15:33:07.775+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,10,1,12,37,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9029> 2026-01-22T15:33:08.789+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9020> 2026-01-22T15:33:09.827+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,9,2,11,38,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -9006> 2026-01-22T15:33:10.852+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8993> 2026-01-22T15:33:11.849+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8979> 2026-01-22T15:33:12.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8966> 2026-01-22T15:33:13.906+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8954> 2026-01-22T15:33:14.918+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8942> 2026-01-22T15:33:15.963+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8931> 2026-01-22T15:33:16.924+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8917> 2026-01-22T15:33:17.931+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8906> 2026-01-22T15:33:18.949+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8897> 2026-01-22T15:33:19.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8886> 2026-01-22T15:33:21.006+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 41 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 41 slow requests (by type [ 'delayed' : 41 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,5,11,38,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8874> 2026-01-22T15:33:21.988+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8862> 2026-01-22T15:33:23.002+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8853> 2026-01-22T15:33:24.005+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8844> 2026-01-22T15:33:25.011+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,41,32,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8830> 2026-01-22T15:33:26.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8817> 2026-01-22T15:33:27.028+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206725120 unmapped: 2580480 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8805> 2026-01-22T15:33:28.068+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,40,33,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8789> 2026-01-22T15:33:29.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8778> 2026-01-22T15:33:30.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8765> 2026-01-22T15:33:31.092+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8753> 2026-01-22T15:33:32.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8739> 2026-01-22T15:33:33.138+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8730> 2026-01-22T15:33:34.094+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8719> 2026-01-22T15:33:35.069+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8708> 2026-01-22T15:33:36.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,5,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8696> 2026-01-22T15:33:37.048+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,6,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8681> 2026-01-22T15:33:38.029+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206733312 unmapped: 2572288 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8668> 2026-01-22T15:33:39.077+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8657> 2026-01-22T15:33:40.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8646> 2026-01-22T15:33:41.134+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8635> 2026-01-22T15:33:42.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206741504 unmapped: 2564096 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8622> 2026-01-22T15:33:43.082+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8611> 2026-01-22T15:33:44.079+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8598> 2026-01-22T15:33:45.103+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8586> 2026-01-22T15:33:46.135+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8575> 2026-01-22T15:33:47.174+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8563> 2026-01-22T15:33:48.127+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8552> 2026-01-22T15:33:49.153+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8541> 2026-01-22T15:33:50.126+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8530> 2026-01-22T15:33:51.111+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8516> 2026-01-22T15:33:52.066+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8502> 2026-01-22T15:33:53.039+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8493> 2026-01-22T15:33:54.080+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8482> 2026-01-22T15:33:55.091+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,10,8,37,36,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8468> 2026-01-22T15:33:56.100+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8459> 2026-01-22T15:33:57.101+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8445> 2026-01-22T15:33:58.121+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8434> 2026-01-22T15:33:59.083+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8421> 2026-01-22T15:34:00.047+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8412> 2026-01-22T15:34:01.090+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,10,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8400> 2026-01-22T15:34:02.060+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8384> 2026-01-22T15:34:03.093+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8375> 2026-01-22T15:34:04.057+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8363> 2026-01-22T15:34:05.084+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8352> 2026-01-22T15:34:06.036+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8343> 2026-01-22T15:34:07.027+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8331> 2026-01-22T15:34:08.067+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8313> 2026-01-22T15:34:09.050+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8304> 2026-01-22T15:34:10.008+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8293> 2026-01-22T15:34:11.001+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8285> 2026-01-22T15:34:11.958+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8273> 2026-01-22T15:34:12.913+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707843 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 432.608551025s of 433.505828857s, submitted: 246
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8255> 2026-01-22T15:34:13.879+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 90 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 90 slow requests (by type [ 'delayed' : 90 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206749696 unmapped: 2555904 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735dbcc400 session 0x55735b58e960
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c7ce800 session 0x55735d390960
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8240> 2026-01-22T15:34:14.863+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206757888 unmapped: 2547712 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8230> 2026-01-22T15:34:15.895+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735dbcd400 session 0x55735d0385a0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735bf03800 session 0x55735ceab0e0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 ms_handle_reset con 0x55735c639400 session 0x55735cbea000
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 heartbeat osd_stat(store_statfs(0x1b13df000/0x0/0x1bfc00000, data 0xb7a0f6c/0xa67f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,62,29])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8210> 2026-01-22T15:34:16.956+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8199> 2026-01-22T15:34:17.969+0000 7f47f8ed4640 -1 osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206790656 unmapped: 2514944 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2712927 data_alloc: 218103808 data_used: 13565952
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 179 handle_osd_map epochs [179,180], i have 179, src has [1,180]
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c639400 session 0x55735b035860
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8181> 2026-01-22T15:34:18.986+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 71 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 71 slow requests (by type [ 'delayed' : 71 ] most affected pool [ 'vms' : 43 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 206798848 unmapped: 2506752 heap: 209305600 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c7ce800 session 0x55735ceaa3c0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 9428992 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8167> 2026-01-22T15:34:19.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 55 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 55 slow requests (by type [ 'delayed' : 55 ] most affected pool [ 'vms' : 34 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbcc400 session 0x55735c6234a0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735cee7800 session 0x55735a739860
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec6000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8153> 2026-01-22T15:34:20.958+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8141> 2026-01-22T15:34:21.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8129> 2026-01-22T15:34:23.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 184 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 184 slow requests (by type [ 'delayed' : 184 ] most affected pool [ 'vms' : 105 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c353800 session 0x55735a22e000
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759862 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8111> 2026-01-22T15:34:24.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8099> 2026-01-22T15:34:25.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8088> 2026-01-22T15:34:26.050+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec7000/0x0/0x1bfc00000, data 0xbcb6ac3/0xab97000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,11,8,33,40,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8077> 2026-01-22T15:34:27.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8066> 2026-01-22T15:34:28.008+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759862 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.176485062s of 14.780130386s, submitted: 54
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735f233c00 session 0x55735ce2f4a0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8049> 2026-01-22T15:34:29.056+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8037> 2026-01-22T15:34:30.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 21 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 21 slow requests (by type [ 'delayed' : 21 ] most affected pool [ 'vms' : 16 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8028> 2026-01-22T15:34:31.012+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8018> 2026-01-22T15:34:32.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b0ec8000/0x0/0x1bfc00000, data 0xbcb6ab3/0xab96000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,0,1,11,8,33,40,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -8008> 2026-01-22T15:34:33.046+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2759365 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735bf08800 session 0x55735cea81e0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c352000 session 0x55735cea9680
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7989> 2026-01-22T15:34:34.081+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7976> 2026-01-22T15:34:35.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7963> 2026-01-22T15:34:36.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,1,10,9,30,43,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7953> 2026-01-22T15:34:37.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7942> 2026-01-22T15:34:38.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7926> 2026-01-22T15:34:39.127+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7917> 2026-01-22T15:34:40.152+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7905> 2026-01-22T15:34:41.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7894> 2026-01-22T15:34:42.112+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7883> 2026-01-22T15:34:43.135+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7868> 2026-01-22T15:34:44.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7855> 2026-01-22T15:34:45.138+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7843> 2026-01-22T15:34:46.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7832> 2026-01-22T15:34:47.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7821> 2026-01-22T15:34:48.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7807> 2026-01-22T15:34:49.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7796> 2026-01-22T15:34:50.146+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7784> 2026-01-22T15:34:51.123+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7774> 2026-01-22T15:34:52.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 42 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 42 slow requests (by type [ 'delayed' : 42 ] most affected pool [ 'vms' : 26 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7763> 2026-01-22T15:34:53.078+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7751> 2026-01-22T15:34:54.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7739> 2026-01-22T15:34:55.054+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7726> 2026-01-22T15:34:56.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 91 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 91 slow requests (by type [ 'delayed' : 91 ] most affected pool [ 'vms' : 56 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c353400 session 0x55735c28ed20
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7713> 2026-01-22T15:34:57.052+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7702> 2026-01-22T15:34:58.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7688> 2026-01-22T15:34:59.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,1,10,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7676> 2026-01-22T15:34:59.979+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7665> 2026-01-22T15:35:00.974+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7654> 2026-01-22T15:35:01.987+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7643> 2026-01-22T15:35:03.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,1,0,11,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7630> 2026-01-22T15:35:04.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7619> 2026-01-22T15:35:05.057+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7608> 2026-01-22T15:35:06.097+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7597> 2026-01-22T15:35:07.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,0,11,9,27,46,61,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7585> 2026-01-22T15:35:08.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7569> 2026-01-22T15:35:09.084+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7558> 2026-01-22T15:35:10.110+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7547> 2026-01-22T15:35:11.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7536> 2026-01-22T15:35:12.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7526> 2026-01-22T15:35:13.095+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7512> 2026-01-22T15:35:14.107+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7501> 2026-01-22T15:35:15.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7489> 2026-01-22T15:35:16.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7480> 2026-01-22T15:35:17.133+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7467> 2026-01-22T15:35:18.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7453> 2026-01-22T15:35:19.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7442> 2026-01-22T15:35:20.160+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7432> 2026-01-22T15:35:21.119+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,11,9,27,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,10,10,27,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7419> 2026-01-22T15:35:22.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7406> 2026-01-22T15:35:23.136+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,10,5,32,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7391> 2026-01-22T15:35:24.155+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7380> 2026-01-22T15:35:25.201+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7366> 2026-01-22T15:35:26.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 37 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 37 slow requests (by type [ 'delayed' : 37 ] most affected pool [ 'vms' : 24 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7355> 2026-01-22T15:35:27.259+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,0,7,8,32,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7343> 2026-01-22T15:35:28.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7328> 2026-01-22T15:35:29.317+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7317> 2026-01-22T15:35:30.322+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7306> 2026-01-22T15:35:31.281+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7294> 2026-01-22T15:35:32.246+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,8,32,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7284> 2026-01-22T15:35:33.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7272> 2026-01-22T15:35:34.255+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7260> 2026-01-22T15:35:35.283+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7245> 2026-01-22T15:35:36.278+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7233> 2026-01-22T15:35:37.298+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7222> 2026-01-22T15:35:38.265+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7207> 2026-01-22T15:35:39.273+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7196> 2026-01-22T15:35:40.231+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7185> 2026-01-22T15:35:41.248+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207241216 unmapped: 9412608 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7174> 2026-01-22T15:35:42.280+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7163> 2026-01-22T15:35:43.289+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7147> 2026-01-22T15:35:44.292+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,7,33,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7136> 2026-01-22T15:35:45.256+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7125> 2026-01-22T15:35:46.268+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,7,6,34,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7113> 2026-01-22T15:35:47.247+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7102> 2026-01-22T15:35:48.239+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,6,7,34,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7087> 2026-01-22T15:35:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,6,7,34,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7075> 2026-01-22T15:35:50.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7066> 2026-01-22T15:35:51.179+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7055> 2026-01-22T15:35:52.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7042> 2026-01-22T15:35:53.235+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7030> 2026-01-22T15:35:54.240+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7019> 2026-01-22T15:35:55.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,34,45,62,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -7007> 2026-01-22T15:35:56.253+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 187 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 187 slow requests (by type [ 'delayed' : 187 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbcf000 session 0x55735d045680
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6994> 2026-01-22T15:35:57.228+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6981> 2026-01-22T15:35:58.197+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6967> 2026-01-22T15:35:59.219+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 9404416 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6958> 2026-01-22T15:36:00.177+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,2,2,11,34,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6946> 2026-01-22T15:36:01.186+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6935> 2026-01-22T15:36:02.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6922> 2026-01-22T15:36:03.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6908> 2026-01-22T15:36:04.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,2,11,34,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6896> 2026-01-22T15:36:05.094+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6887> 2026-01-22T15:36:06.058+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6878> 2026-01-22T15:36:07.105+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6865> 2026-01-22T15:36:08.065+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6850> 2026-01-22T15:36:09.019+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6838> 2026-01-22T15:36:10.047+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6827> 2026-01-22T15:36:11.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,2,11,34,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6815> 2026-01-22T15:36:12.022+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6802> 2026-01-22T15:36:13.033+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6788> 2026-01-22T15:36:14.064+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6777> 2026-01-22T15:36:15.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6765> 2026-01-22T15:36:16.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6753> 2026-01-22T15:36:17.092+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6742> 2026-01-22T15:36:18.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6728> 2026-01-22T15:36:19.153+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6717> 2026-01-22T15:36:20.111+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6706> 2026-01-22T15:36:21.117+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6697> 2026-01-22T15:36:22.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,41,66,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6685> 2026-01-22T15:36:23.131+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6671> 2026-01-22T15:36:24.161+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6662> 2026-01-22T15:36:25.121+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 9396224 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6648> 2026-01-22T15:36:26.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 101 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 101 slow requests (by type [ 'delayed' : 101 ] most affected pool [ 'vms' : 62 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6639> 2026-01-22T15:36:27.116+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6628> 2026-01-22T15:36:28.150+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,3,1,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6615> 2026-01-22T15:36:29.200+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6599> 2026-01-22T15:36:30.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6588> 2026-01-22T15:36:31.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 7200.5 total, 600.0 interval
    Cumulative writes: 14K writes, 44K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.00 MB/s
    Cumulative WAL: 14K writes, 4846 syncs, 2.94 writes per sync, written: 0.03 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 636 writes, 1117 keys, 636 commit groups, 1.0 writes per commit group, ingest: 0.49 MB, 0.00 MB/s
    Interval WAL: 636 writes, 315 syncs, 2.02 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6575> 2026-01-22T15:36:32.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6566> 2026-01-22T15:36:33.108+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6552> 2026-01-22T15:36:34.072+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6539> 2026-01-22T15:36:35.113+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6528> 2026-01-22T15:36:36.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6517> 2026-01-22T15:36:37.120+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6504> 2026-01-22T15:36:38.091+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 188 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 188 slow requests (by type [ 'delayed' : 188 ] most affected pool [ 'vms' : 106 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735dbd3c00 session 0x55735b5054a0
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6488> 2026-01-22T15:36:39.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,2,2,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6478> 2026-01-22T15:36:40.167+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6465> 2026-01-22T15:36:41.181+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6456> 2026-01-22T15:36:42.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6443> 2026-01-22T15:36:43.199+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6429> 2026-01-22T15:36:44.227+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 9388032 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: mgrc ms_handle_reset ms_handle_reset con 0x55735a80bc00
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1334415348
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1334415348,v1:192.168.122.100:6801/1334415348]
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: mgrc handle_mgr_configure stats_period=5
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6413> 2026-01-22T15:36:45.251+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,1,3,11,35,40,67,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6399> 2026-01-22T15:36:46.232+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 120 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 120 slow requests (by type [ 'delayed' : 120 ] most affected pool [ 'vms' : 72 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735c5cb800 session 0x55735a69a000
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6386> 2026-01-22T15:36:47.213+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6375> 2026-01-22T15:36:48.226+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6361> 2026-01-22T15:36:49.222+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,2,0,1,0,1,1,3,11,35,39,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6349> 2026-01-22T15:36:50.208+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,1,1,3,11,35,39,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6339> 2026-01-22T15:36:51.180+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6328> 2026-01-22T15:36:52.193+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6317> 2026-01-22T15:36:53.157+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,0,1,1,3,11,35,39,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6302> 2026-01-22T15:36:54.142+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207323136 unmapped: 9330688 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,1,3,11,35,39,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6290> 2026-01-22T15:36:55.169+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207331328 unmapped: 9322496 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6279> 2026-01-22T15:36:56.151+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207331328 unmapped: 9322496 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 143.398162842s of 148.265136719s, submitted: 21
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6267> 2026-01-22T15:36:57.163+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6256> 2026-01-22T15:36:58.147+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6242> 2026-01-22T15:36:59.137+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207372288 unmapped: 9281536 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6231> 2026-01-22T15:37:00.109+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,1,1,1,3,11,35,39,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 9273344 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6219> 2026-01-22T15:37:01.134+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207380480 unmapped: 9273344 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6208> 2026-01-22T15:37:02.178+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207388672 unmapped: 9265152 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735cee5000 session 0x55735c6232c0
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6195> 2026-01-22T15:37:03.168+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207486976 unmapped: 9166848 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6181> 2026-01-22T15:37:04.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207503360 unmapped: 9150464 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6168> 2026-01-22T15:37:05.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6157> 2026-01-22T15:37:06.088+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,0,2,3,11,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6147> 2026-01-22T15:37:07.073+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6136> 2026-01-22T15:37:08.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6122> 2026-01-22T15:37:09.036+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207536128 unmapped: 9117696 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6111> 2026-01-22T15:37:10.066+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6100> 2026-01-22T15:37:11.074+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,0,1,4,11,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6088> 2026-01-22T15:37:12.102+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,4,11,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6076> 2026-01-22T15:37:13.062+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6062> 2026-01-22T15:37:14.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6051> 2026-01-22T15:37:15.148+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207552512 unmapped: 9101312 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6040> 2026-01-22T15:37:16.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 49 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 49 slow requests (by type [ 'delayed' : 49 ] most affected pool [ 'vms' : 29 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 9093120 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6029> 2026-01-22T15:37:17.164+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207560704 unmapped: 9093120 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6018> 2026-01-22T15:37:18.174+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,3,12,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,1,1,3,12,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -6002> 2026-01-22T15:37:19.125+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5991> 2026-01-22T15:37:20.122+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5980> 2026-01-22T15:37:21.115+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,12,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5967> 2026-01-22T15:37:22.128+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,3,1,3,12,34,40,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5955> 2026-01-22T15:37:23.159+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5941> 2026-01-22T15:37:24.118+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5930> 2026-01-22T15:37:25.098+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 207568896 unmapped: 9084928 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5919> 2026-01-22T15:37:26.067+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 132 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 132 slow requests (by type [ 'delayed' : 132 ] most affected pool [ 'vms' : 78 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 ms_handle_reset con 0x55735caef800 session 0x55735d038d20
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5908> 2026-01-22T15:37:27.041+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,3,0,0,0,1,3,1,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5898> 2026-01-22T15:37:28.002+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5884> 2026-01-22T15:37:29.043+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,3,0,0,1,3,1,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5868> 2026-01-22T15:37:30.079+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5857> 2026-01-22T15:37:31.069+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5846> 2026-01-22T15:37:32.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,3,1,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5834> 2026-01-22T15:37:33.026+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211517440 unmapped: 5136384 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5820> 2026-01-22T15:37:34.044+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5809> 2026-01-22T15:37:35.011+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,1,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5797> 2026-01-22T15:37:35.989+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5786> 2026-01-22T15:37:36.960+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211525632 unmapped: 5128192 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5775> 2026-01-22T15:37:38.009+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5761> 2026-01-22T15:37:38.994+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,4,1,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5749> 2026-01-22T15:37:39.981+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5738> 2026-01-22T15:37:40.978+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211533824 unmapped: 5120000 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5727> 2026-01-22T15:37:41.991+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5718> 2026-01-22T15:37:42.952+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5702> 2026-01-22T15:37:43.915+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,2,3,12,33,41,68,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5692> 2026-01-22T15:37:44.943+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5685> 2026-01-22T15:37:45.903+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5674> 2026-01-22T15:37:46.885+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5658> 2026-01-22T15:37:47.935+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5646> 2026-01-22T15:37:48.944+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5635> 2026-01-22T15:37:49.953+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,3,2,3,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5623> 2026-01-22T15:37:50.996+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5612> 2026-01-22T15:37:51.980+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5601> 2026-01-22T15:37:53.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5587> 2026-01-22T15:37:54.027+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211542016 unmapped: 5111808 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5576> 2026-01-22T15:37:54.998+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211550208 unmapped: 5103616 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5567> 2026-01-22T15:37:56.005+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 96 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 96 slow requests (by type [ 'delayed' : 96 ] most affected pool [ 'vms' : 59 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,4,3,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211550208 unmapped: 5103616 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 heartbeat osd_stat(store_statfs(0x1b13dc000/0x0/0x1bfc00000, data 0xb7a2ab3/0xa682000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,4,3,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5554> 2026-01-22T15:37:56.999+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 5095424 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5541> 2026-01-22T15:37:58.023+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211558400 unmapped: 5095424 heap: 216653824 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2718644 data_alloc: 218103808 data_used: 13574144
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5528> 2026-01-22T15:37:59.014+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.944828033s of 62.401416779s, submitted: 329
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 21782528 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5516> 2026-01-22T15:38:00.016+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211656704 unmapped: 21782528 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5503> 2026-01-22T15:38:01.024+0000 7f47f8ed4640 -1 osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 180 handle_osd_map epochs [180,181], i have 180, src has [1,181]
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 181 handle_osd_map epochs [181,181], i have 181, src has [1,181]
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 181 ms_handle_reset con 0x55735c5f7800 session 0x55735d3912c0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211664896 unmapped: 21774336 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5487> 2026-01-22T15:38:01.992+0000 7f47f8ed4640 -1 osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 181 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 181 heartbeat osd_stat(store_statfs(0x1b0768000/0x0/0x1bfc00000, data 0xc414729/0xb2f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,3,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211681280 unmapped: 21757952 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 181 handle_osd_map epochs [181,182], i have 181, src has [1,182]
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5472> 2026-01-22T15:38:02.995+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 151 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 151 slow requests (by type [ 'delayed' : 151 ] most affected pool [ 'vms' : 90 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211705856 unmapped: 21733376 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2813337 data_alloc: 218103808 data_used: 13582336
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5460> 2026-01-22T15:38:03.988+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 ms_handle_reset con 0x55735c33dc00 session 0x55735ad96960
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5446> 2026-01-22T15:38:04.958+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5435> 2026-01-22T15:38:06.002+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5424> 2026-01-22T15:38:06.963+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211746816 unmapped: 21692416 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5413> 2026-01-22T15:38:07.955+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 heartbeat osd_stat(store_statfs(0x1b13d5000/0x0/0x1bfc00000, data 0xb7a63f3/0xa688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,3,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 heartbeat osd_stat(store_statfs(0x1b13d5000/0x0/0x1bfc00000, data 0xb7a63f3/0xa688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211812352 unmapped: 21626880 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2729241 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5399> 2026-01-22T15:38:08.986+0000 7f47f8ed4640 -1 osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 182 handle_osd_map epochs [182,183], i have 182, src has [1,183]
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.054518700s of 10.458124161s, submitted: 50
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5385> 2026-01-22T15:38:09.988+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5371> 2026-01-22T15:38:11.019+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211820544 unmapped: 21618688 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5360> 2026-01-22T15:38:12.006+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211836928 unmapped: 21602304 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5349> 2026-01-22T15:38:12.989+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211836928 unmapped: 21602304 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5335> 2026-01-22T15:38:13.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5325> 2026-01-22T15:38:14.992+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5312> 2026-01-22T15:38:16.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5301> 2026-01-22T15:38:17.020+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5290> 2026-01-22T15:38:17.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5275> 2026-01-22T15:38:19.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5264> 2026-01-22T15:38:19.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5253> 2026-01-22T15:38:20.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5241> 2026-01-22T15:38:21.968+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5230> 2026-01-22T15:38:23.012+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5216> 2026-01-22T15:38:23.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,33,40,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5202> 2026-01-22T15:38:24.938+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5191> 2026-01-22T15:38:25.981+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5180> 2026-01-22T15:38:27.005+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211845120 unmapped: 21594112 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5169> 2026-01-22T15:38:28.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5156> 2026-01-22T15:38:29.022+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,4,4,12,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5144> 2026-01-22T15:38:30.051+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5133> 2026-01-22T15:38:31.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5122> 2026-01-22T15:38:32.028+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5111> 2026-01-22T15:38:33.011+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5097> 2026-01-22T15:38:34.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5086> 2026-01-22T15:38:35.053+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,4,12,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5074> 2026-01-22T15:38:36.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5065> 2026-01-22T15:38:37.047+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5054> 2026-01-22T15:38:38.009+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5038> 2026-01-22T15:38:39.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5027> 2026-01-22T15:38:40.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5016> 2026-01-22T15:38:41.114+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,4,12,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -5004> 2026-01-22T15:38:42.158+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4993> 2026-01-22T15:38:43.159+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4979> 2026-01-22T15:38:44.124+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4968> 2026-01-22T15:38:45.092+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4957> 2026-01-22T15:38:46.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4946> 2026-01-22T15:38:47.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,7,3,13,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4934> 2026-01-22T15:38:48.058+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4925> 2026-01-22T15:38:49.036+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4911> 2026-01-22T15:38:50.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4898> 2026-01-22T15:38:51.034+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4887> 2026-01-22T15:38:52.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4876> 2026-01-22T15:38:53.024+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,13,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4864> 2026-01-22T15:38:54.014+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,4,13,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4851> 2026-01-22T15:38:55.021+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,3,14,32,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4837> 2026-01-22T15:38:55.984+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4827> 2026-01-22T15:38:56.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,6,3,13,33,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4821> 2026-01-22T15:38:57.890+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4811> 2026-01-22T15:38:58.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4796> 2026-01-22T15:38:59.887+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4780> 2026-01-22T15:39:00.923+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4777> 2026-01-22T15:39:01.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4764> 2026-01-22T15:39:02.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,33,41,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4752> 2026-01-22T15:39:03.847+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4738> 2026-01-22T15:39:04.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4727> 2026-01-22T15:39:05.831+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4716> 2026-01-22T15:39:06.803+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4707> 2026-01-22T15:39:07.765+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211853312 unmapped: 21585920 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4695> 2026-01-22T15:39:08.724+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4681> 2026-01-22T15:39:09.742+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4667> 2026-01-22T15:39:10.762+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,5,13,32,42,69,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4655> 2026-01-22T15:39:11.801+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4644> 2026-01-22T15:39:12.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4633> 2026-01-22T15:39:13.761+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4619> 2026-01-22T15:39:14.752+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4608> 2026-01-22T15:39:15.704+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4599> 2026-01-22T15:39:16.746+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,6,13,30,44,69,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4587> 2026-01-22T15:39:17.779+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4574> 2026-01-22T15:39:18.805+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4560> 2026-01-22T15:39:19.840+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4551> 2026-01-22T15:39:20.848+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211861504 unmapped: 21577728 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4540> 2026-01-22T15:39:21.812+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,5,14,30,43,70,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4526> 2026-01-22T15:39:22.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4515> 2026-01-22T15:39:23.833+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4503> 2026-01-22T15:39:24.826+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 195 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 195 slow requests (by type [ 'delayed' : 195 ] most affected pool [ 'vms' : 111 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735caf0800 session 0x55735afa90e0
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,3,5,14,30,43,70,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4489> 2026-01-22T15:39:25.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4476> 2026-01-22T15:39:26.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4467> 2026-01-22T15:39:27.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4453> 2026-01-22T15:39:28.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4439> 2026-01-22T15:39:29.934+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,5,14,30,43,70,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4427> 2026-01-22T15:39:30.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4416> 2026-01-22T15:39:31.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,3,5,14,29,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4404> 2026-01-22T15:39:32.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4395> 2026-01-22T15:39:33.986+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,3,5,14,29,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4378> 2026-01-22T15:39:34.960+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4373> 2026-01-22T15:39:35.924+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4360> 2026-01-22T15:39:36.937+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4348> 2026-01-22T15:39:37.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4339> 2026-01-22T15:39:39.002+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4323> 2026-01-22T15:39:39.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 211869696 unmapped: 21569536 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c33e800 session 0x55735cea8960
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,3,0,0,0,0,1,0,0,0,8,14,29,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4309> 2026-01-22T15:39:40.997+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4298> 2026-01-22T15:39:41.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4287> 2026-01-22T15:39:42.932+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4276> 2026-01-22T15:39:43.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4262> 2026-01-22T15:39:44.944+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,13,30,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4250> 2026-01-22T15:39:45.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4239> 2026-01-22T15:39:46.966+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,10,33,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4227> 2026-01-22T15:39:47.948+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,3,0,1,0,0,8,10,33,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4215> 2026-01-22T15:39:48.933+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4207> 2026-01-22T15:39:49.895+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4194> 2026-01-22T15:39:50.871+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4183> 2026-01-22T15:39:51.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,8,10,33,43,71,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4171> 2026-01-22T15:39:52.903+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4160> 2026-01-22T15:39:53.893+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4146> 2026-01-22T15:39:54.884+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 79 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 79 slow requests (by type [ 'delayed' : 79 ] most affected pool [ 'vms' : 46 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,0,8,10,30,45,72,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4134> 2026-01-22T15:39:55.880+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4125> 2026-01-22T15:39:56.860+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,8,10,22,53,72,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4113> 2026-01-22T15:39:57.857+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215851008 unmapped: 17588224 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4100> 2026-01-22T15:39:58.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Jan 22 10:45:44 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3267566899' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4086> 2026-01-22T15:39:59.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4075> 2026-01-22T15:40:00.823+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4065> 2026-01-22T15:40:01.867+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4054> 2026-01-22T15:40:02.897+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4040> 2026-01-22T15:40:03.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4026> 2026-01-22T15:40:04.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4013> 2026-01-22T15:40:05.877+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -4002> 2026-01-22T15:40:06.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,10,21,54,72,30])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3987> 2026-01-22T15:40:07.959+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3978> 2026-01-22T15:40:08.963+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3964> 2026-01-22T15:40:09.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 98 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 98 slow requests (by type [ 'delayed' : 98 ] most affected pool [ 'vms' : 61 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3953> 2026-01-22T15:40:10.994+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215859200 unmapped: 17580032 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,0,8,9,22,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3941> 2026-01-22T15:40:11.958+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3936> 2026-01-22T15:40:12.912+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3926> 2026-01-22T15:40:13.935+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3906> 2026-01-22T15:40:14.975+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3897> 2026-01-22T15:40:15.957+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,5,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3885> 2026-01-22T15:40:16.998+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3874> 2026-01-22T15:40:18.001+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3863> 2026-01-22T15:40:19.031+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3847> 2026-01-22T15:40:19.996+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3837> 2026-01-22T15:40:20.987+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3826> 2026-01-22T15:40:21.983+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3815> 2026-01-22T15:40:22.940+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3810> 2026-01-22T15:40:23.936+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3794> 2026-01-22T15:40:24.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3782> 2026-01-22T15:40:25.892+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3771> 2026-01-22T15:40:26.927+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3760> 2026-01-22T15:40:27.921+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3751> 2026-01-22T15:40:28.882+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3735> 2026-01-22T15:40:29.929+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,7,6,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3720> 2026-01-22T15:40:30.943+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3711> 2026-01-22T15:40:31.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3706> 2026-01-22T15:40:32.904+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3690> 2026-01-22T15:40:33.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3684> 2026-01-22T15:40:34.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,6,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3670> 2026-01-22T15:40:35.926+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,6,26,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3658> 2026-01-22T15:40:36.911+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3647> 2026-01-22T15:40:37.891+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215867392 unmapped: 17571840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3636> 2026-01-22T15:40:38.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3620> 2026-01-22T15:40:39.853+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3609> 2026-01-22T15:40:40.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3600> 2026-01-22T15:40:41.888+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3589> 2026-01-22T15:40:42.870+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3576> 2026-01-22T15:40:43.874+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,1,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3563> 2026-01-22T15:40:44.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3552> 2026-01-22T15:40:45.902+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3540> 2026-01-22T15:40:46.913+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3527> 2026-01-22T15:40:47.907+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3517> 2026-01-22T15:40:48.868+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3502> 2026-01-22T15:40:49.908+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3491> 2026-01-22T15:40:50.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3475> 2026-01-22T15:40:51.951+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,54,71,31])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3465> 2026-01-22T15:40:52.946+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3454> 2026-01-22T15:40:53.953+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3446> 2026-01-22T15:40:54.915+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3433> 2026-01-22T15:40:55.918+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3421> 2026-01-22T15:40:56.920+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3410> 2026-01-22T15:40:57.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3396> 2026-01-22T15:40:58.965+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3384> 2026-01-22T15:40:59.949+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,7,5,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3372> 2026-01-22T15:41:00.947+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3369> 2026-01-22T15:41:01.917+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215875584 unmapped: 17563648 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3356> 2026-01-22T15:41:02.914+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3343> 2026-01-22T15:41:03.925+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3324> 2026-01-22T15:41:04.972+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3315> 2026-01-22T15:41:05.974+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3304> 2026-01-22T15:41:06.971+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,6,6,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3292> 2026-01-22T15:41:07.999+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3280> 2026-01-22T15:41:08.985+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3268> 2026-01-22T15:41:09.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3257> 2026-01-22T15:41:11.039+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3244> 2026-01-22T15:41:11.993+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215883776 unmapped: 17555456 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3232> 2026-01-22T15:41:13.025+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3221> 2026-01-22T15:41:14.072+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3207> 2026-01-22T15:41:15.119+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3196> 2026-01-22T15:41:16.115+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3184> 2026-01-22T15:41:17.147+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3173> 2026-01-22T15:41:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3162> 2026-01-22T15:41:19.131+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3148> 2026-01-22T15:41:20.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3137> 2026-01-22T15:41:21.077+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3126> 2026-01-22T15:41:22.065+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,4,8,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3114> 2026-01-22T15:41:23.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3103> 2026-01-22T15:41:24.073+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3089> 2026-01-22T15:41:25.090+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3078> 2026-01-22T15:41:26.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3075> 2026-01-22T15:41:27.061+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3062> 2026-01-22T15:41:28.016+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3048> 2026-01-22T15:41:29.057+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3034> 2026-01-22T15:41:30.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3022> 2026-01-22T15:41:31.041+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3013> 2026-01-22T15:41:31.995+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -3000> 2026-01-22T15:41:33.015+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2989> 2026-01-22T15:41:34.044+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,3,9,27,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2975> 2026-01-22T15:41:35.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2964> 2026-01-22T15:41:36.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2951> 2026-01-22T15:41:37.070+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 199 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 199 slow requests (by type [ 'delayed' : 199 ] most affected pool [ 'vms' : 112 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 215891968 unmapped: 17547264 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735a80b800 session 0x55735c6230e0
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2938> 2026-01-22T15:41:38.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,0,0,3,4,8,28,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2926> 2026-01-22T15:41:39.081+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2912> 2026-01-22T15:41:40.050+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,0,3,4,8,28,53,67,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2900> 2026-01-22T15:41:41.056+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2889> 2026-01-22T15:41:42.064+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2878> 2026-01-22T15:41:43.017+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,0,3,4,8,24,55,69,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2866> 2026-01-22T15:41:44.063+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2852> 2026-01-22T15:41:45.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2841> 2026-01-22T15:41:46.102+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2832> 2026-01-22T15:41:47.101+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,5,0,0,3,4,8,24,55,69,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2820> 2026-01-22T15:41:48.076+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c5ce400 session 0x55735a319860
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2807> 2026-01-22T15:41:49.104+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2791> 2026-01-22T15:41:50.093+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,0,0,3,1,11,24,55,69,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2779> 2026-01-22T15:41:51.059+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2768> 2026-01-22T15:41:52.098+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2757> 2026-01-22T15:41:53.071+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,0,4,11,23,56,69,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2745> 2026-01-22T15:41:54.030+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2731> 2026-01-22T15:41:55.042+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2720> 2026-01-22T15:41:56.075+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,5,0,0,4,11,23,56,69,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2708> 2026-01-22T15:41:57.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2694> 2026-01-22T15:41:58.125+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,4,11,23,56,69,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2684> 2026-01-22T15:41:59.126+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2670> 2026-01-22T15:42:00.141+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,5,0,0,4,11,23,55,70,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 216506368 unmapped: 16932864 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2664> 2026-01-22T15:42:01.100+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735a6e5c00 session 0x55735d390f00
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2649> 2026-01-22T15:42:02.095+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2635> 2026-01-22T15:42:03.134+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2626> 2026-01-22T15:42:04.171+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2612> 2026-01-22T15:42:05.150+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,0,0,4,11,23,55,70,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2606> 2026-01-22T15:42:06.108+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2593> 2026-01-22T15:42:07.099+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 6 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 6 slow requests (by type [ 'delayed' : 6 ] most affected pool [ 'vms' : 5 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,6,0,0,4,11,23,55,70,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2578> 2026-01-22T15:42:08.145+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2569> 2026-01-22T15:42:09.178+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2555> 2026-01-22T15:42:10.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2544> 2026-01-22T15:42:11.130+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,0,4,11,20,58,70,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2532> 2026-01-22T15:42:12.157+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2523> 2026-01-22T15:42:13.191+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,5,0,4,11,20,58,70,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2511> 2026-01-22T15:42:14.160+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2501> 2026-01-22T15:42:15.120+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2488> 2026-01-22T15:42:16.107+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2474> 2026-01-22T15:42:17.149+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2471> 2026-01-22T15:42:18.112+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 7 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] RETRY=4 ondisk+retry+read+known_if_redirected+supports_pool_eio e153)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 7 slow requests (by type [ 'delayed' : 7 ] most affected pool [ 'vms' : 6 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,5,0,4,11,20,58,70,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219652096 unmapped: 13787136 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2457> 2026-01-22T15:42:19.127+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,5,0,4,11,20,58,70,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2444> 2026-01-22T15:42:20.128+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2433> 2026-01-22T15:42:21.123+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2420> 2026-01-22T15:42:22.117+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2409> 2026-01-22T15:42:23.111+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2394> 2026-01-22T15:42:24.161+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2382> 2026-01-22T15:42:25.204+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,6,0,4,11,20,55,73,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2370> 2026-01-22T15:42:26.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 219660288 unmapped: 13778944 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2359> 2026-01-22T15:42:27.211+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 179 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 179 slow requests (by type [ 'delayed' : 179 ] most affected pool [ 'vms' : 104 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c5fb400 session 0x55735b6a8000
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2346> 2026-01-22T15:42:28.180+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,6,0,4,11,20,55,73,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2334> 2026-01-22T15:42:29.192+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2320> 2026-01-22T15:42:30.148+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2309> 2026-01-22T15:42:31.176+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2298> 2026-01-22T15:42:32.181+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2287> 2026-01-22T15:42:33.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2274> 2026-01-22T15:42:34.244+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,7,0,4,11,20,55,73,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2259> 2026-01-22T15:42:35.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2248> 2026-01-22T15:42:36.233+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,73,36])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2236> 2026-01-22T15:42:37.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,73,36])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2224> 2026-01-22T15:42:38.238+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2215> 2026-01-22T15:42:39.241+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2201> 2026-01-22T15:42:40.235+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2190> 2026-01-22T15:42:41.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2178> 2026-01-22T15:42:42.269+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,7,0,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2166> 2026-01-22T15:42:43.282+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,2,5,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2154> 2026-01-22T15:42:44.332+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2140> 2026-01-22T15:42:45.295+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2128> 2026-01-22T15:42:46.258+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2116> 2026-01-22T15:42:47.237+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2105> 2026-01-22T15:42:48.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2094> 2026-01-22T15:42:49.213+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2078> 2026-01-22T15:42:50.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2067> 2026-01-22T15:42:51.307+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2055> 2026-01-22T15:42:52.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,2,5,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2043> 2026-01-22T15:42:53.285+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2032> 2026-01-22T15:42:54.317+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2017> 2026-01-22T15:42:55.357+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -2006> 2026-01-22T15:42:56.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1995> 2026-01-22T15:42:57.339+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 158 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 158 slow requests (by type [ 'delayed' : 158 ] most affected pool [ 'vms' : 93 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1984> 2026-01-22T15:42:58.294+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,6,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1974> 2026-01-22T15:42:59.270+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1960> 2026-01-22T15:43:00.298+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1949> 2026-01-22T15:43:01.292+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1938> 2026-01-22T15:43:02.263+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1927> 2026-01-22T15:43:03.266+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,4,11,20,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1915> 2026-01-22T15:43:04.218+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1901> 2026-01-22T15:43:05.195+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1890> 2026-01-22T15:43:06.222+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1879> 2026-01-22T15:43:07.257+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1868> 2026-01-22T15:43:08.223+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1856> 2026-01-22T15:43:09.242+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1842> 2026-01-22T15:43:10.234+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1831> 2026-01-22T15:43:11.281+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1819> 2026-01-22T15:43:12.322+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1808> 2026-01-22T15:43:13.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1797> 2026-01-22T15:43:14.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1784> 2026-01-22T15:43:15.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,10,21,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1770> 2026-01-22T15:43:16.403+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1761> 2026-01-22T15:43:17.413+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1750> 2026-01-22T15:43:18.424+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1736> 2026-01-22T15:43:19.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1721> 2026-01-22T15:43:20.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1710> 2026-01-22T15:43:21.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1699> 2026-01-22T15:43:22.386+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1688> 2026-01-22T15:43:23.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1677> 2026-01-22T15:43:24.378+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1663> 2026-01-22T15:43:25.371+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1651> 2026-01-22T15:43:26.338+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1639> 2026-01-22T15:43:27.388+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1628> 2026-01-22T15:43:28.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1617> 2026-01-22T15:43:29.400+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1600> 2026-01-22T15:43:30.392+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1589> 2026-01-22T15:43:31.394+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1578> 2026-01-22T15:43:32.358+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1567> 2026-01-22T15:43:33.381+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1556> 2026-01-22T15:43:34.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1542> 2026-01-22T15:43:35.382+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1532> 2026-01-22T15:43:36.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1521> 2026-01-22T15:43:37.325+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1510> 2026-01-22T15:43:38.359+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1499> 2026-01-22T15:43:39.379+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,9,22,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1484> 2026-01-22T15:43:40.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,8,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1472> 2026-01-22T15:43:41.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1461> 2026-01-22T15:43:42.329+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1450> 2026-01-22T15:43:43.354+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1437> 2026-01-22T15:43:44.402+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,4,8,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1422> 2026-01-22T15:43:45.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1413> 2026-01-22T15:43:46.401+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1402> 2026-01-22T15:43:47.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,8,3,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1390> 2026-01-22T15:43:48.384+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1379> 2026-01-22T15:43:49.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1365> 2026-01-22T15:43:50.467+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1354> 2026-01-22T15:43:51.492+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1342> 2026-01-22T15:43:52.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1328> 2026-01-22T15:43:53.505+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1319> 2026-01-22T15:43:54.465+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1305> 2026-01-22T15:43:55.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1294> 2026-01-22T15:43:56.535+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1283> 2026-01-22T15:43:57.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,3,8,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1271> 2026-01-22T15:43:58.528+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1260> 2026-01-22T15:43:59.500+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,9,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1245> 2026-01-22T15:44:00.547+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,9,9,23,55,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1233> 2026-01-22T15:44:01.532+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1222> 2026-01-22T15:44:02.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1211> 2026-01-22T15:44:03.520+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1200> 2026-01-22T15:44:04.529+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1186> 2026-01-22T15:44:05.515+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1177> 2026-01-22T15:44:06.473+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1160> 2026-01-22T15:44:07.460+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1149> 2026-01-22T15:44:08.472+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1140> 2026-01-22T15:44:09.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1126> 2026-01-22T15:44:10.456+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1114> 2026-01-22T15:44:11.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,6,12,18,60,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1100> 2026-01-22T15:44:12.501+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,7,12,18,60,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1090> 2026-01-22T15:44:13.536+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223862784 unmapped: 9576448 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1079> 2026-01-22T15:44:14.526+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 207 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 207 slow requests (by type [ 'delayed' : 207 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c338000 session 0x55735c623e00
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1063> 2026-01-22T15:44:15.488+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1052> 2026-01-22T15:44:16.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1041> 2026-01-22T15:44:17.459+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,0,1,7,12,17,61,71,38])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1029> 2026-01-22T15:44:18.447+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1018> 2026-01-22T15:44:19.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: -1004> 2026-01-22T15:44:20.485+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,0,1,7,12,17,61,70,39])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -992> 2026-01-22T15:44:21.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -981> 2026-01-22T15:44:22.510+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -972> 2026-01-22T15:44:23.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -959> 2026-01-22T15:44:24.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -945> 2026-01-22T15:44:25.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -934> 2026-01-22T15:44:26.494+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,1,7,12,17,60,71,39])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -923> 2026-01-22T15:44:27.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -912> 2026-01-22T15:44:28.436+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -901> 2026-01-22T15:44:29.412+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -885> 2026-01-22T15:44:30.435+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -876> 2026-01-22T15:44:31.468+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -865> 2026-01-22T15:44:32.430+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -853> 2026-01-22T15:44:33.395+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -842> 2026-01-22T15:44:34.368+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -831> 2026-01-22T15:44:35.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -817> 2026-01-22T15:44:36.393+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -805> 2026-01-22T15:44:37.387+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,1,7,12,16,61,70,40])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -791> 2026-01-22T15:44:38.399+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -780> 2026-01-22T15:44:39.417+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -769> 2026-01-22T15:44:40.376+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -755> 2026-01-22T15:44:41.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -744> 2026-01-22T15:44:42.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,0,8,11,17,61,70,40])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -734> 2026-01-22T15:44:43.361+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -723> 2026-01-22T15:44:44.370+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 127 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 127 slow requests (by type [ 'delayed' : 127 ] most affected pool [ 'vms' : 77 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -712> 2026-01-22T15:44:45.362+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -698> 2026-01-22T15:44:46.353+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -687> 2026-01-22T15:44:47.366+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -678> 2026-01-22T15:44:48.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,70,40])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -666> 2026-01-22T15:44:49.397+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224059392 unmapped: 9379840 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -653> 2026-01-22T15:44:50.372+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -639> 2026-01-22T15:44:51.389+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -627> 2026-01-22T15:44:52.408+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -616> 2026-01-22T15:44:53.373+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -605> 2026-01-22T15:44:54.409+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,17,61,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -593> 2026-01-22T15:44:55.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -579> 2026-01-22T15:44:56.421+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -568> 2026-01-22T15:44:57.461+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -557> 2026-01-22T15:44:58.419+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -546> 2026-01-22T15:44:59.428+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -535> 2026-01-22T15:45:00.451+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -520> 2026-01-22T15:45:01.490+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -508> 2026-01-22T15:45:02.443+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -495> 2026-01-22T15:45:03.433+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -484> 2026-01-22T15:45:04.405+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -473> 2026-01-22T15:45:05.418+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -459> 2026-01-22T15:45:06.440+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -448> 2026-01-22T15:45:07.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -438> 2026-01-22T15:45:08.506+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -427> 2026-01-22T15:45:09.478+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -416> 2026-01-22T15:45:10.514+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -401> 2026-01-22T15:45:11.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -390> 2026-01-22T15:45:12.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 211 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 211 slow requests (by type [ 'delayed' : 211 ] most affected pool [ 'vms' : 119 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 ms_handle_reset con 0x55735c339800 session 0x55735d10ed20
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -377> 2026-01-22T15:45:13.502+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -366> 2026-01-22T15:45:14.493+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -355> 2026-01-22T15:45:15.543+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -341> 2026-01-22T15:45:16.495+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,4,0,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -329> 2026-01-22T15:45:17.455+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -318> 2026-01-22T15:45:18.479+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -307> 2026-01-22T15:45:19.439+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -296> 2026-01-22T15:45:20.471+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -280> 2026-01-22T15:45:21.521+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,11,16,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -270> 2026-01-22T15:45:22.486+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -259> 2026-01-22T15:45:23.508+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -248> 2026-01-22T15:45:24.498+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,10,17,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -236> 2026-01-22T15:45:25.512+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -222> 2026-01-22T15:45:26.653+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -211> 2026-01-22T15:45:27.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -202> 2026-01-22T15:45:28.738+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,4,8,10,17,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -188> 2026-01-22T15:45:29.725+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -177> 2026-01-22T15:45:30.681+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -163> 2026-01-22T15:45:31.693+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -152> 2026-01-22T15:45:32.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -141> 2026-01-22T15:45:33.627+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -130> 2026-01-22T15:45:34.656+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -116> 2026-01-22T15:45:35.651+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:  -102> 2026-01-22T15:45:36.648+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -91> 2026-01-22T15:45:37.698+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -80> 2026-01-22T15:45:38.745+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -71> 2026-01-22T15:45:39.793+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: bluestore.MempoolThread(0x557358e83b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2732215 data_alloc: 218103808 data_used: 13586432
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 223854592 unmapped: 9584640 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -55> 2026-01-22T15:45:40.815+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -43> 2026-01-22T15:45:41.808+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224100352 unmapped: 9338880 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'config diff' '{prefix=config diff}'
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'config show' '{prefix=config show}'
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'counter dump' '{prefix=counter dump}'
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'counter schema' '{prefix=counter schema}'
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224288768 unmapped: 9150464 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -22> 2026-01-22T15:45:42.856+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 177 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 177 slow requests (by type [ 'delayed' : 177 ] most affected pool [ 'vms' : 100 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]:   -12> 2026-01-22T15:45:43.825+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: prioritycache tune_memory target: 4294967296 mapped: 224411648 unmapped: 9027584 heap: 233439232 old mem: 2845415832 new mem: 2845415832
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 heartbeat osd_stat(store_statfs(0x1b13d2000/0x0/0x1bfc00000, data 0xb7a7f4f/0xa68b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x419f9c6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,4,8,8,19,62,69,41])
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: do_command 'log dump' '{prefix=log dump}'
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:44 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:44 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:44.859+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/441006344' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:45.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/274536513' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: Health check update: 177 slow ops, oldest one blocked for 7733 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:45 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:45 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:45 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:45.849+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:45 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:45 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:45 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:45.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Jan 22 10:45:45 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/786694620' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2361371598' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1078206682' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Jan 22 10:45:46 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1216321723' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 22 10:45:46 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:46 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:46 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:46.845+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:47 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 22 10:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 22 10:45:47 np0005592159 ovn_metadata_agent[143492]: 2026-01-22 15:45:47.287 143497 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 22 10:45:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:47.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:47 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:47 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:47.844+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:47 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:47 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0) v1
Jan 22 10:45:47 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1073887977' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 22 10:45:47 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:47 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:47 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:47.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1048087941' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4074908786' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Jan 22 10:45:48 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4169395491' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 22 10:45:48 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:48 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:48 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:48.830+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3388168716' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1693688001' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1153195434' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 22 10:45:49 np0005592159 systemd[1]: Starting Hostname Service...
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2448017329' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Jan 22 10:45:49 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2247528220' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 22 10:45:49 np0005592159 systemd[1]: Started Hostname Service.
Jan 22 10:45:49 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:49 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:49 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:49.858+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:49 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:49 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:49 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:49.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3813666031' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1703807545' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1371623889' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/317296694' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 22 10:45:50 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:50 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:50 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:50.813+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/917443970' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0) v1
Jan 22 10:45:50 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4082909003' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: Health check update: 212 slow ops, oldest one blocked for 7738 sec, osd.2 has slow ops (SLOW_OPS)
Jan 22 10:45:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:51.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:51 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:51 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:51 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:51.766+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:51 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:51 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:51 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:51 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:51.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0) v1
Jan 22 10:45:52 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4237834724' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 22 10:45:52 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:52 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:52.814+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:52 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0) v1
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3956659008' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0) v1
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2645925705' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 22 10:45:53 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:53.829+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:53 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:53 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0) v1
Jan 22 10:45:53 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4060242858' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 22 10:45:53 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:53 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.001000026s ======
Jan 22 10:45:53 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.100 - anonymous [22/Jan/2026:15:45:53.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 22 10:45:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 22 10:45:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 22 10:45:54 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 22 10:45:54 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 22 10:45:54 np0005592159 ceph-088fe176-0106-5401-803c-2da38b73b76a-osd-2[79775]: 2026-01-22T15:45:54.873+0000 7f47f8ed4640 -1 osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:54 np0005592159 ceph-osd[79779]: osd.2 183 get_health_metrics reporting 212 slow ops, oldest is osd_op(client.14140.0:10 2.12 2:4e99cc3e:::rbd_mirror_snapshot_schedule:head [omap-get-vals in=16b] snapc 0=[] ondisk+read+known_if_redirected+supports_pool_eio e50)
Jan 22 10:45:54 np0005592159 ceph-osd[79779]: log_channel(cluster) log [WRN] : 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0) v1
Jan 22 10:45:55 np0005592159 ceph-mon[77081]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3195370970' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 22 10:45:55 np0005592159 ceph-mon[77081]: mon.compute-2@1(peon).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 22 10:45:55 np0005592159 radosgw[80769]: ====== starting new request req=0x7f935e56e6f0 =====
Jan 22 10:45:55 np0005592159 radosgw[80769]: ====== req done req=0x7f935e56e6f0 op status=0 http_status=200 latency=0.000000000s ======
Jan 22 10:45:55 np0005592159 radosgw[80769]: beast: 0x7f935e56e6f0: 192.168.122.102 - anonymous [22/Jan/2026:15:45:55.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 22 10:45:55 np0005592159 ceph-mon[77081]: 212 slow requests (by type [ 'delayed' : 212 ] most affected pool [ 'vms' : 120 ])
Jan 22 10:45:55 np0005592159 ceph-mon[77081]: Health check update: 212 slow ops, oldest one blocked for 7743 sec, osd.2 has slow ops (SLOW_OPS)
